00:00:00.001 Started by upstream project "autotest-per-patch" build number 122878 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.131 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.170 Using shallow fetch with depth 1 00:00:00.170 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.170 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.588 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.601 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.614 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:05.614 > git config core.sparsecheckout # timeout=10 00:00:05.628 > git read-tree -mu HEAD # timeout=10 00:00:05.647 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:05.667 Commit message: "inventory/dev: add missing long names" 00:00:05.667 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:05.774 [Pipeline] Start of Pipeline 00:00:05.789 [Pipeline] library 00:00:05.790 Loading library shm_lib@master 00:00:05.791 Library shm_lib@master is cached. Copying from home. 00:00:05.808 [Pipeline] node 00:00:20.815 Still waiting to schedule task 00:00:20.815 Waiting for next available executor on ‘vagrant-vm-host’ 00:04:11.337 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:04:11.339 [Pipeline] { 00:04:11.353 [Pipeline] catchError 00:04:11.355 [Pipeline] { 00:04:11.373 [Pipeline] wrap 00:04:11.384 [Pipeline] { 00:04:11.393 [Pipeline] stage 00:04:11.395 [Pipeline] { (Prologue) 00:04:11.415 [Pipeline] echo 00:04:11.416 Node: VM-host-SM9 00:04:11.421 [Pipeline] cleanWs 00:04:11.428 [WS-CLEANUP] Deleting project workspace... 00:04:11.428 [WS-CLEANUP] Deferred wipeout is used... 
00:04:11.434 [WS-CLEANUP] done 00:04:11.636 [Pipeline] setCustomBuildProperty 00:04:11.687 [Pipeline] nodesByLabel 00:04:11.689 Found a total of 1 nodes with the 'sorcerer' label 00:04:11.699 [Pipeline] httpRequest 00:04:11.702 HttpMethod: GET 00:04:11.703 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:04:11.704 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:04:11.706 Response Code: HTTP/1.1 200 OK 00:04:11.706 Success: Status code 200 is in the accepted range: 200,404 00:04:11.707 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:04:11.846 [Pipeline] sh 00:04:12.124 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:04:12.144 [Pipeline] httpRequest 00:04:12.149 HttpMethod: GET 00:04:12.149 URL: http://10.211.164.101/packages/spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz 00:04:12.150 Sending request to url: http://10.211.164.101/packages/spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz 00:04:12.150 Response Code: HTTP/1.1 200 OK 00:04:12.151 Success: Status code 200 is in the accepted range: 200,404 00:04:12.152 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz 00:04:14.274 [Pipeline] sh 00:04:14.551 + tar --no-same-owner -xf spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz 00:04:17.842 [Pipeline] sh 00:04:18.120 + git -C spdk log --oneline -n5 00:04:18.120 08ee631f2 [TEST] autotest: collect nvmf coverage 00:04:18.120 3cdbb5383 test: avoid URING sock coverage degradation 00:04:18.120 9e0643d4a sock: add default impl override 00:04:18.120 bff75b6cb sock: check if impl is registered 00:04:18.120 fe2f92165 sock: replace sock impl priorities 00:04:18.138 [Pipeline] writeFile 00:04:18.154 [Pipeline] sh 00:04:18.433 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:18.448 [Pipeline] sh 00:04:18.727 + cat autorun-spdk.conf 00:04:18.727 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:18.727 SPDK_TEST_NVMF=1 00:04:18.727 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:18.727 SPDK_TEST_USDT=1 00:04:18.727 SPDK_TEST_NVMF_MDNS=1 00:04:18.727 SPDK_RUN_UBSAN=1 00:04:18.727 NET_TYPE=virt 00:04:18.727 SPDK_JSONRPC_GO_CLIENT=1 00:04:18.727 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:18.734 RUN_NIGHTLY=0 00:04:18.736 [Pipeline] } 00:04:18.757 [Pipeline] // stage 00:04:18.773 [Pipeline] stage 00:04:18.776 [Pipeline] { (Run VM) 00:04:18.790 [Pipeline] sh 00:04:19.070 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:19.070 + echo 'Start stage prepare_nvme.sh' 00:04:19.070 Start stage prepare_nvme.sh 00:04:19.070 + [[ -n 0 ]] 00:04:19.070 + disk_prefix=ex0 00:04:19.070 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:04:19.070 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:04:19.070 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:04:19.070 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:19.070 ++ SPDK_TEST_NVMF=1 00:04:19.070 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:19.070 ++ SPDK_TEST_USDT=1 00:04:19.070 ++ SPDK_TEST_NVMF_MDNS=1 00:04:19.070 ++ SPDK_RUN_UBSAN=1 00:04:19.070 ++ NET_TYPE=virt 00:04:19.070 ++ SPDK_JSONRPC_GO_CLIENT=1 00:04:19.070 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:19.070 ++ RUN_NIGHTLY=0 00:04:19.070 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:04:19.070 + nvme_files=() 00:04:19.070 + declare -A nvme_files 00:04:19.070 + 
backend_dir=/var/lib/libvirt/images/backends 00:04:19.070 + nvme_files['nvme.img']=5G 00:04:19.070 + nvme_files['nvme-cmb.img']=5G 00:04:19.070 + nvme_files['nvme-multi0.img']=4G 00:04:19.070 + nvme_files['nvme-multi1.img']=4G 00:04:19.070 + nvme_files['nvme-multi2.img']=4G 00:04:19.070 + nvme_files['nvme-openstack.img']=8G 00:04:19.070 + nvme_files['nvme-zns.img']=5G 00:04:19.070 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:19.070 + (( SPDK_TEST_FTL == 1 )) 00:04:19.070 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:19.070 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:04:19.070 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:04:19.070 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:04:19.070 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:04:19.070 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:04:19.070 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:04:19.070 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:19.070 + for nvme in "${!nvme_files[@]}" 00:04:19.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:04:19.329 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:19.329 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:04:19.329 + echo 'End stage prepare_nvme.sh' 00:04:19.329 End stage prepare_nvme.sh 00:04:19.340 [Pipeline] sh 00:04:19.618 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:19.618 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:04:19.618 00:04:19.618 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:04:19.618 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:04:19.618 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 
00:04:19.618 HELP=0 00:04:19.618 DRY_RUN=0 00:04:19.618 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:04:19.618 NVME_DISKS_TYPE=nvme,nvme, 00:04:19.618 NVME_AUTO_CREATE=0 00:04:19.618 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:04:19.618 NVME_CMB=,, 00:04:19.618 NVME_PMR=,, 00:04:19.618 NVME_ZNS=,, 00:04:19.618 NVME_MS=,, 00:04:19.618 NVME_FDP=,, 00:04:19.618 SPDK_VAGRANT_DISTRO=fedora38 00:04:19.618 SPDK_VAGRANT_VMCPU=10 00:04:19.618 SPDK_VAGRANT_VMRAM=12288 00:04:19.618 SPDK_VAGRANT_PROVIDER=libvirt 00:04:19.618 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:19.618 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:19.618 SPDK_OPENSTACK_NETWORK=0 00:04:19.618 VAGRANT_PACKAGE_BOX=0 00:04:19.618 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:19.618 FORCE_DISTRO=true 00:04:19.618 VAGRANT_BOX_VERSION= 00:04:19.618 EXTRA_VAGRANTFILES= 00:04:19.618 NIC_MODEL=e1000 00:04:19.618 00:04:19.618 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:04:19.618 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:04:22.900 Bringing machine 'default' up with 'libvirt' provider... 00:04:23.467 ==> default: Creating image (snapshot of base box volume). 00:04:23.726 ==> default: Creating domain with the following settings... 00:04:23.726 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715762679_15a5a83357a7ee02b485 00:04:23.726 ==> default: -- Domain type: kvm 00:04:23.726 ==> default: -- Cpus: 10 00:04:23.726 ==> default: -- Feature: acpi 00:04:23.726 ==> default: -- Feature: apic 00:04:23.726 ==> default: -- Feature: pae 00:04:23.726 ==> default: -- Memory: 12288M 00:04:23.726 ==> default: -- Memory Backing: hugepages: 00:04:23.726 ==> default: -- Management MAC: 00:04:23.726 ==> default: -- Loader: 00:04:23.726 ==> default: -- Nvram: 00:04:23.726 ==> default: -- Base box: spdk/fedora38 00:04:23.726 ==> default: -- Storage pool: default 00:04:23.726 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715762679_15a5a83357a7ee02b485.img (20G) 00:04:23.726 ==> default: -- Volume Cache: default 00:04:23.726 ==> default: -- Kernel: 00:04:23.726 ==> default: -- Initrd: 00:04:23.726 ==> default: -- Graphics Type: vnc 00:04:23.726 ==> default: -- Graphics Port: -1 00:04:23.726 ==> default: -- Graphics IP: 127.0.0.1 00:04:23.726 ==> default: -- Graphics Password: Not defined 00:04:23.726 ==> default: -- Video Type: cirrus 00:04:23.726 ==> default: -- Video VRAM: 9216 00:04:23.726 ==> default: -- Sound Type: 00:04:23.726 ==> default: -- Keymap: en-us 00:04:23.726 ==> default: -- TPM Path: 00:04:23.726 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:23.726 ==> default: -- Command line args: 00:04:23.726 ==> default: -> value=-device, 00:04:23.726 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:23.726 ==> default: -> value=-drive, 00:04:23.726 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:04:23.726 ==> default: -> value=-device, 00:04:23.726 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.726 ==> default: -> value=-device, 00:04:23.726 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:23.726 ==> default: -> value=-drive, 00:04:23.726 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:23.726 ==> default: -> value=-device, 00:04:23.726 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.726 ==> default: -> value=-drive, 00:04:23.726 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:23.726 ==> default: -> value=-device, 00:04:23.726 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.726 ==> default: -> value=-drive, 00:04:23.726 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:23.726 ==> default: -> value=-device, 00:04:23.726 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:23.726 ==> default: Creating shared folders metadata... 00:04:23.726 ==> default: Starting domain. 00:04:25.104 ==> default: Waiting for domain to get an IP address... 00:04:47.033 ==> default: Waiting for SSH to become available... 00:04:47.033 ==> default: Configuring and enabling network interfaces... 00:04:48.409 default: SSH address: 192.168.121.93:22 00:04:48.409 default: SSH username: vagrant 00:04:48.409 default: SSH auth method: private key 00:04:50.935 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:59.066 ==> default: Mounting SSHFS shared folder... 00:05:00.442 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:05:00.442 ==> default: Checking Mount.. 00:05:01.377 ==> default: Folder Successfully Mounted! 00:05:01.377 ==> default: Running provisioner: file... 00:05:02.360 default: ~/.gitconfig => .gitconfig 00:05:02.619 00:05:02.619 SUCCESS! 00:05:02.619 00:05:02.619 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:05:02.619 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:02.619 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:05:02.619 00:05:02.631 [Pipeline] } 00:05:02.649 [Pipeline] // stage 00:05:02.657 [Pipeline] dir 00:05:02.657 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:05:02.659 [Pipeline] { 00:05:02.671 [Pipeline] catchError 00:05:02.673 [Pipeline] { 00:05:02.713 [Pipeline] sh 00:05:02.995 + vagrant ssh-config --host vagrant 00:05:02.995 + sed -ne /^Host/,$p 00:05:02.995 + tee ssh_conf 00:05:07.178 Host vagrant 00:05:07.178 HostName 192.168.121.93 00:05:07.178 User vagrant 00:05:07.178 Port 22 00:05:07.178 UserKnownHostsFile /dev/null 00:05:07.178 StrictHostKeyChecking no 00:05:07.178 PasswordAuthentication no 00:05:07.178 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:05:07.178 IdentitiesOnly yes 00:05:07.178 LogLevel FATAL 00:05:07.178 ForwardAgent yes 00:05:07.178 ForwardX11 yes 00:05:07.178 00:05:07.191 [Pipeline] withEnv 00:05:07.192 [Pipeline] { 00:05:07.208 [Pipeline] sh 00:05:07.484 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:07.484 source /etc/os-release 00:05:07.484 [[ -e /image.version ]] && img=$(< /image.version) 00:05:07.484 # Minimal, systemd-like check. 00:05:07.484 if [[ -e /.dockerenv ]]; then 00:05:07.484 # Clear garbage from the node's name: 00:05:07.484 # agt-er_autotest_547-896 -> autotest_547-896 00:05:07.484 # $HOSTNAME is the actual container id 00:05:07.484 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:07.484 if mountpoint -q /etc/hostname; then 00:05:07.484 # We can assume this is a mount from a host where container is running, 00:05:07.484 # so fetch its hostname to easily identify the target swarm worker. 00:05:07.484 container="$(< /etc/hostname) ($agent)" 00:05:07.484 else 00:05:07.484 # Fallback 00:05:07.484 container=$agent 00:05:07.484 fi 00:05:07.484 fi 00:05:07.484 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:07.484 00:05:07.752 [Pipeline] } 00:05:07.772 [Pipeline] // withEnv 00:05:07.780 [Pipeline] setCustomBuildProperty 00:05:07.789 [Pipeline] stage 00:05:07.791 [Pipeline] { (Tests) 00:05:07.804 [Pipeline] sh 00:05:08.079 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:08.093 [Pipeline] timeout 00:05:08.093 Timeout set to expire in 40 min 00:05:08.095 [Pipeline] { 00:05:08.112 [Pipeline] sh 00:05:08.427 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:08.994 HEAD is now at 08ee631f2 [TEST] autotest: collect nvmf coverage 00:05:09.006 [Pipeline] sh 00:05:09.285 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:09.556 [Pipeline] sh 00:05:09.838 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:09.854 [Pipeline] sh 00:05:10.133 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:05:10.391 ++ readlink -f spdk_repo 00:05:10.391 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:10.391 + [[ -n /home/vagrant/spdk_repo ]] 00:05:10.391 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:10.391 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:10.391 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:10.391 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:10.391 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:10.391 + cd /home/vagrant/spdk_repo 00:05:10.391 + source /etc/os-release 00:05:10.391 ++ NAME='Fedora Linux' 00:05:10.391 ++ VERSION='38 (Cloud Edition)' 00:05:10.391 ++ ID=fedora 00:05:10.391 ++ VERSION_ID=38 00:05:10.391 ++ VERSION_CODENAME= 00:05:10.391 ++ PLATFORM_ID=platform:f38 00:05:10.391 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:05:10.391 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:10.391 ++ LOGO=fedora-logo-icon 00:05:10.391 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:05:10.391 ++ HOME_URL=https://fedoraproject.org/ 00:05:10.391 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:05:10.391 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:10.391 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:10.391 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:10.391 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:05:10.391 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:10.391 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:05:10.391 ++ SUPPORT_END=2024-05-14 00:05:10.391 ++ VARIANT='Cloud Edition' 00:05:10.391 ++ VARIANT_ID=cloud 00:05:10.391 + uname -a 00:05:10.391 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:05:10.391 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.959 Hugepages 00:05:10.959 node hugesize free / total 00:05:10.959 node0 1048576kB 0 / 0 00:05:10.959 node0 2048kB 0 / 0 00:05:10.959 00:05:10.959 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.959 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:10.959 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:10.959 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:10.959 + rm -f /tmp/spdk-ld-path 00:05:10.959 + source autorun-spdk.conf 00:05:10.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:10.959 ++ SPDK_TEST_NVMF=1 00:05:10.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:10.959 ++ SPDK_TEST_USDT=1 00:05:10.959 ++ SPDK_TEST_NVMF_MDNS=1 00:05:10.959 ++ SPDK_RUN_UBSAN=1 00:05:10.959 ++ NET_TYPE=virt 00:05:10.959 ++ SPDK_JSONRPC_GO_CLIENT=1 00:05:10.959 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:10.959 ++ RUN_NIGHTLY=0 00:05:10.959 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:10.959 + [[ -n '' ]] 00:05:10.959 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:10.959 + for M in /var/spdk/build-*-manifest.txt 00:05:10.959 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:10.959 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:10.959 + for M in /var/spdk/build-*-manifest.txt 00:05:10.959 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:10.959 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:10.959 ++ uname 00:05:10.959 + [[ Linux == \L\i\n\u\x ]] 00:05:10.959 + sudo dmesg -T 00:05:10.959 + sudo dmesg --clear 00:05:10.959 + dmesg_pid=5151 00:05:10.959 + [[ Fedora Linux == FreeBSD ]] 00:05:10.959 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:10.959 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:10.959 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:10.959 + sudo dmesg -Tw 00:05:10.959 + [[ -x /usr/src/fio-static/fio ]] 00:05:10.959 + export FIO_BIN=/usr/src/fio-static/fio 00:05:10.959 + 
FIO_BIN=/usr/src/fio-static/fio 00:05:10.959 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:10.959 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:10.959 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:10.959 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:10.959 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:10.959 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:10.959 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:10.959 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:10.959 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.959 Test configuration: 00:05:10.959 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:10.959 SPDK_TEST_NVMF=1 00:05:10.959 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:10.959 SPDK_TEST_USDT=1 00:05:10.959 SPDK_TEST_NVMF_MDNS=1 00:05:10.959 SPDK_RUN_UBSAN=1 00:05:10.959 NET_TYPE=virt 00:05:10.959 SPDK_JSONRPC_GO_CLIENT=1 00:05:10.959 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:11.218 RUN_NIGHTLY=0 08:45:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.218 08:45:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:11.218 08:45:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.218 08:45:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.218 08:45:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.218 08:45:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.218 08:45:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.218 08:45:27 -- paths/export.sh@5 -- $ export PATH 00:05:11.218 08:45:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.218 08:45:27 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:11.218 08:45:27 -- common/autobuild_common.sh@437 -- $ date +%s 00:05:11.218 08:45:27 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715762727.XXXXXX 00:05:11.218 08:45:27 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715762727.U3oJGc 00:05:11.218 08:45:27 -- 
common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:05:11.218 08:45:27 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:05:11.218 08:45:27 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:11.218 08:45:27 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:11.218 08:45:27 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:11.218 08:45:27 -- common/autobuild_common.sh@453 -- $ get_config_params 00:05:11.218 08:45:27 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:05:11.218 08:45:27 -- common/autotest_common.sh@10 -- $ set +x 00:05:11.218 08:45:27 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:05:11.218 08:45:27 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:05:11.218 08:45:27 -- pm/common@17 -- $ local monitor 00:05:11.218 08:45:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.218 08:45:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.218 08:45:27 -- pm/common@25 -- $ sleep 1 00:05:11.218 08:45:27 -- pm/common@21 -- $ date +%s 00:05:11.218 08:45:27 -- pm/common@21 -- $ date +%s 00:05:11.218 08:45:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715762727 00:05:11.218 08:45:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715762727 00:05:11.218 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715762727_collect-vmstat.pm.log 00:05:11.218 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715762727_collect-cpu-load.pm.log 00:05:12.151 08:45:28 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:05:12.151 08:45:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:12.151 08:45:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:12.151 08:45:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:12.151 08:45:28 -- spdk/autobuild.sh@16 -- $ date -u 00:05:12.151 Wed May 15 08:45:28 AM UTC 2024 00:05:12.151 08:45:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:12.151 v24.05-pre-615-g08ee631f2 00:05:12.151 08:45:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:12.151 08:45:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:12.151 08:45:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:12.151 08:45:28 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:05:12.151 08:45:28 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:05:12.151 08:45:28 -- common/autotest_common.sh@10 -- $ set +x 00:05:12.151 ************************************ 00:05:12.151 START TEST ubsan 00:05:12.151 ************************************ 00:05:12.151 using ubsan 00:05:12.151 08:45:28 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:05:12.151 00:05:12.151 real 0m0.000s 00:05:12.151 user 0m0.000s 00:05:12.151 sys 0m0.000s 
00:05:12.151 08:45:28 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:05:12.151 ************************************ 00:05:12.151 END TEST ubsan 00:05:12.151 ************************************ 00:05:12.151 08:45:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:12.151 08:45:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:12.151 08:45:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:12.151 08:45:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:12.151 08:45:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:12.151 08:45:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:12.151 08:45:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:12.151 08:45:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:12.151 08:45:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:12.151 08:45:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:05:12.409 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:12.409 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:12.666 Using 'verbs' RDMA provider 00:05:25.835 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:38.046 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:38.046 go version go1.21.1 linux/amd64 00:05:38.046 Creating mk/config.mk...done. 00:05:38.046 Creating mk/cc.flags.mk...done. 00:05:38.046 Type 'make' to build. 00:05:38.046 08:45:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:05:38.046 08:45:54 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:05:38.046 08:45:54 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:05:38.046 08:45:54 -- common/autotest_common.sh@10 -- $ set +x 00:05:38.046 ************************************ 00:05:38.046 START TEST make 00:05:38.046 ************************************ 00:05:38.046 08:45:54 make -- common/autotest_common.sh@1121 -- $ make -j10 00:05:38.305 make[1]: Nothing to be done for 'all'. 
00:05:56.381 The Meson build system 00:05:56.381 Version: 1.3.1 00:05:56.381 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:56.381 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:56.381 Build type: native build 00:05:56.381 Program cat found: YES (/usr/bin/cat) 00:05:56.381 Project name: DPDK 00:05:56.381 Project version: 23.11.0 00:05:56.381 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:56.381 C linker for the host machine: cc ld.bfd 2.39-16 00:05:56.381 Host machine cpu family: x86_64 00:05:56.381 Host machine cpu: x86_64 00:05:56.381 Message: ## Building in Developer Mode ## 00:05:56.381 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:56.381 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:56.381 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:56.381 Program python3 found: YES (/usr/bin/python3) 00:05:56.381 Program cat found: YES (/usr/bin/cat) 00:05:56.381 Compiler for C supports arguments -march=native: YES 00:05:56.381 Checking for size of "void *" : 8 00:05:56.381 Checking for size of "void *" : 8 (cached) 00:05:56.381 Library m found: YES 00:05:56.381 Library numa found: YES 00:05:56.381 Has header "numaif.h" : YES 00:05:56.381 Library fdt found: NO 00:05:56.381 Library execinfo found: NO 00:05:56.381 Has header "execinfo.h" : YES 00:05:56.381 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:56.381 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:56.381 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:56.381 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:56.381 Run-time dependency openssl found: YES 3.0.9 00:05:56.381 Run-time dependency libpcap found: YES 1.10.4 00:05:56.381 Has header "pcap.h" with dependency libpcap: YES 00:05:56.381 Compiler for C supports arguments -Wcast-qual: YES 00:05:56.381 Compiler for C supports arguments -Wdeprecated: YES 00:05:56.381 Compiler for C supports arguments -Wformat: YES 00:05:56.381 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:56.381 Compiler for C supports arguments -Wformat-security: NO 00:05:56.381 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:56.381 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:56.381 Compiler for C supports arguments -Wnested-externs: YES 00:05:56.381 Compiler for C supports arguments -Wold-style-definition: YES 00:05:56.381 Compiler for C supports arguments -Wpointer-arith: YES 00:05:56.381 Compiler for C supports arguments -Wsign-compare: YES 00:05:56.381 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:56.381 Compiler for C supports arguments -Wundef: YES 00:05:56.381 Compiler for C supports arguments -Wwrite-strings: YES 00:05:56.381 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:56.381 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:56.381 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:56.381 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:56.381 Program objdump found: YES (/usr/bin/objdump) 00:05:56.381 Compiler for C supports arguments -mavx512f: YES 00:05:56.381 Checking if "AVX512 checking" compiles: YES 00:05:56.381 Fetching value of define "__SSE4_2__" : 1 00:05:56.381 Fetching value of define "__AES__" : 1 00:05:56.381 Fetching value of define "__AVX__" : 1 00:05:56.381 
Fetching value of define "__AVX2__" : 1 00:05:56.381 Fetching value of define "__AVX512BW__" : (undefined) 00:05:56.381 Fetching value of define "__AVX512CD__" : (undefined) 00:05:56.381 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:56.381 Fetching value of define "__AVX512F__" : (undefined) 00:05:56.381 Fetching value of define "__AVX512VL__" : (undefined) 00:05:56.381 Fetching value of define "__PCLMUL__" : 1 00:05:56.382 Fetching value of define "__RDRND__" : 1 00:05:56.382 Fetching value of define "__RDSEED__" : 1 00:05:56.382 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:56.382 Fetching value of define "__znver1__" : (undefined) 00:05:56.382 Fetching value of define "__znver2__" : (undefined) 00:05:56.382 Fetching value of define "__znver3__" : (undefined) 00:05:56.382 Fetching value of define "__znver4__" : (undefined) 00:05:56.382 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:56.382 Message: lib/log: Defining dependency "log" 00:05:56.382 Message: lib/kvargs: Defining dependency "kvargs" 00:05:56.382 Message: lib/telemetry: Defining dependency "telemetry" 00:05:56.382 Checking for function "getentropy" : NO 00:05:56.382 Message: lib/eal: Defining dependency "eal" 00:05:56.382 Message: lib/ring: Defining dependency "ring" 00:05:56.382 Message: lib/rcu: Defining dependency "rcu" 00:05:56.382 Message: lib/mempool: Defining dependency "mempool" 00:05:56.382 Message: lib/mbuf: Defining dependency "mbuf" 00:05:56.382 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:56.382 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:56.382 Compiler for C supports arguments -mpclmul: YES 00:05:56.382 Compiler for C supports arguments -maes: YES 00:05:56.382 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:56.382 Compiler for C supports arguments -mavx512bw: YES 00:05:56.382 Compiler for C supports arguments -mavx512dq: YES 00:05:56.382 Compiler for C supports arguments -mavx512vl: YES 00:05:56.382 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:56.382 Compiler for C supports arguments -mavx2: YES 00:05:56.382 Compiler for C supports arguments -mavx: YES 00:05:56.382 Message: lib/net: Defining dependency "net" 00:05:56.382 Message: lib/meter: Defining dependency "meter" 00:05:56.382 Message: lib/ethdev: Defining dependency "ethdev" 00:05:56.382 Message: lib/pci: Defining dependency "pci" 00:05:56.382 Message: lib/cmdline: Defining dependency "cmdline" 00:05:56.382 Message: lib/hash: Defining dependency "hash" 00:05:56.382 Message: lib/timer: Defining dependency "timer" 00:05:56.382 Message: lib/compressdev: Defining dependency "compressdev" 00:05:56.382 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:56.382 Message: lib/dmadev: Defining dependency "dmadev" 00:05:56.382 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:56.382 Message: lib/power: Defining dependency "power" 00:05:56.382 Message: lib/reorder: Defining dependency "reorder" 00:05:56.382 Message: lib/security: Defining dependency "security" 00:05:56.382 Has header "linux/userfaultfd.h" : YES 00:05:56.382 Has header "linux/vduse.h" : YES 00:05:56.382 Message: lib/vhost: Defining dependency "vhost" 00:05:56.382 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:56.382 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:56.382 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:56.382 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:56.382 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:56.382 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:56.382 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:56.382 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:56.382 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:56.382 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:56.382 Program doxygen found: YES (/usr/bin/doxygen) 00:05:56.382 Configuring doxy-api-html.conf using configuration 00:05:56.382 Configuring doxy-api-man.conf using configuration 00:05:56.382 Program mandb found: YES (/usr/bin/mandb) 00:05:56.382 Program sphinx-build found: NO 00:05:56.382 Configuring rte_build_config.h using configuration 00:05:56.382 Message: 00:05:56.382 ================= 00:05:56.382 Applications Enabled 00:05:56.382 ================= 00:05:56.382 00:05:56.382 apps: 00:05:56.382 00:05:56.382 00:05:56.382 Message: 00:05:56.382 ================= 00:05:56.382 Libraries Enabled 00:05:56.382 ================= 00:05:56.382 00:05:56.382 libs: 00:05:56.382 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:56.382 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:56.382 cryptodev, dmadev, power, reorder, security, vhost, 00:05:56.382 00:05:56.382 Message: 00:05:56.382 =============== 00:05:56.382 Drivers Enabled 00:05:56.382 =============== 00:05:56.382 00:05:56.382 common: 00:05:56.382 00:05:56.382 bus: 00:05:56.382 pci, vdev, 00:05:56.382 mempool: 00:05:56.382 ring, 00:05:56.382 dma: 00:05:56.382 00:05:56.382 net: 00:05:56.382 00:05:56.382 crypto: 00:05:56.382 00:05:56.382 compress: 00:05:56.382 00:05:56.382 vdpa: 00:05:56.382 00:05:56.382 00:05:56.382 Message: 00:05:56.382 ================= 00:05:56.382 Content Skipped 00:05:56.382 ================= 00:05:56.382 00:05:56.382 apps: 00:05:56.382 dumpcap: explicitly disabled via build config 00:05:56.382 graph: explicitly disabled via build config 00:05:56.382 pdump: explicitly disabled via build config 00:05:56.382 proc-info: explicitly disabled via build config 00:05:56.382 test-acl: explicitly disabled via build config 00:05:56.382 test-bbdev: explicitly disabled via build config 00:05:56.382 test-cmdline: explicitly disabled via build config 00:05:56.382 test-compress-perf: explicitly disabled via build config 00:05:56.382 test-crypto-perf: explicitly disabled via build config 00:05:56.382 test-dma-perf: explicitly disabled via build config 00:05:56.382 test-eventdev: explicitly disabled via build config 00:05:56.382 test-fib: explicitly disabled via build config 00:05:56.382 test-flow-perf: explicitly disabled via build config 00:05:56.382 test-gpudev: explicitly disabled via build config 00:05:56.382 test-mldev: explicitly disabled via build config 00:05:56.382 test-pipeline: explicitly disabled via build config 00:05:56.382 test-pmd: explicitly disabled via build config 00:05:56.382 test-regex: explicitly disabled via build config 00:05:56.382 test-sad: explicitly disabled via build config 00:05:56.382 test-security-perf: explicitly disabled via build config 00:05:56.382 00:05:56.382 libs: 00:05:56.382 metrics: explicitly disabled via build config 00:05:56.382 acl: explicitly disabled via build config 00:05:56.382 bbdev: explicitly disabled via build config 00:05:56.382 bitratestats: explicitly disabled via build config 00:05:56.382 bpf: explicitly disabled via build config 00:05:56.382 cfgfile: explicitly 
disabled via build config 00:05:56.382 distributor: explicitly disabled via build config 00:05:56.382 efd: explicitly disabled via build config 00:05:56.382 eventdev: explicitly disabled via build config 00:05:56.382 dispatcher: explicitly disabled via build config 00:05:56.382 gpudev: explicitly disabled via build config 00:05:56.382 gro: explicitly disabled via build config 00:05:56.382 gso: explicitly disabled via build config 00:05:56.382 ip_frag: explicitly disabled via build config 00:05:56.382 jobstats: explicitly disabled via build config 00:05:56.382 latencystats: explicitly disabled via build config 00:05:56.382 lpm: explicitly disabled via build config 00:05:56.382 member: explicitly disabled via build config 00:05:56.382 pcapng: explicitly disabled via build config 00:05:56.382 rawdev: explicitly disabled via build config 00:05:56.382 regexdev: explicitly disabled via build config 00:05:56.382 mldev: explicitly disabled via build config 00:05:56.382 rib: explicitly disabled via build config 00:05:56.382 sched: explicitly disabled via build config 00:05:56.382 stack: explicitly disabled via build config 00:05:56.382 ipsec: explicitly disabled via build config 00:05:56.382 pdcp: explicitly disabled via build config 00:05:56.382 fib: explicitly disabled via build config 00:05:56.382 port: explicitly disabled via build config 00:05:56.382 pdump: explicitly disabled via build config 00:05:56.382 table: explicitly disabled via build config 00:05:56.382 pipeline: explicitly disabled via build config 00:05:56.382 graph: explicitly disabled via build config 00:05:56.382 node: explicitly disabled via build config 00:05:56.382 00:05:56.382 drivers: 00:05:56.382 common/cpt: not in enabled drivers build config 00:05:56.382 common/dpaax: not in enabled drivers build config 00:05:56.382 common/iavf: not in enabled drivers build config 00:05:56.382 common/idpf: not in enabled drivers build config 00:05:56.382 common/mvep: not in enabled drivers build config 00:05:56.382 common/octeontx: not in enabled drivers build config 00:05:56.382 bus/auxiliary: not in enabled drivers build config 00:05:56.382 bus/cdx: not in enabled drivers build config 00:05:56.382 bus/dpaa: not in enabled drivers build config 00:05:56.382 bus/fslmc: not in enabled drivers build config 00:05:56.382 bus/ifpga: not in enabled drivers build config 00:05:56.382 bus/platform: not in enabled drivers build config 00:05:56.382 bus/vmbus: not in enabled drivers build config 00:05:56.382 common/cnxk: not in enabled drivers build config 00:05:56.382 common/mlx5: not in enabled drivers build config 00:05:56.382 common/nfp: not in enabled drivers build config 00:05:56.382 common/qat: not in enabled drivers build config 00:05:56.382 common/sfc_efx: not in enabled drivers build config 00:05:56.382 mempool/bucket: not in enabled drivers build config 00:05:56.382 mempool/cnxk: not in enabled drivers build config 00:05:56.382 mempool/dpaa: not in enabled drivers build config 00:05:56.382 mempool/dpaa2: not in enabled drivers build config 00:05:56.382 mempool/octeontx: not in enabled drivers build config 00:05:56.382 mempool/stack: not in enabled drivers build config 00:05:56.382 dma/cnxk: not in enabled drivers build config 00:05:56.382 dma/dpaa: not in enabled drivers build config 00:05:56.382 dma/dpaa2: not in enabled drivers build config 00:05:56.382 dma/hisilicon: not in enabled drivers build config 00:05:56.382 dma/idxd: not in enabled drivers build config 00:05:56.382 dma/ioat: not in enabled drivers build config 00:05:56.382 
dma/skeleton: not in enabled drivers build config 00:05:56.382 net/af_packet: not in enabled drivers build config 00:05:56.382 net/af_xdp: not in enabled drivers build config 00:05:56.382 net/ark: not in enabled drivers build config 00:05:56.382 net/atlantic: not in enabled drivers build config 00:05:56.382 net/avp: not in enabled drivers build config 00:05:56.382 net/axgbe: not in enabled drivers build config 00:05:56.382 net/bnx2x: not in enabled drivers build config 00:05:56.382 net/bnxt: not in enabled drivers build config 00:05:56.383 net/bonding: not in enabled drivers build config 00:05:56.383 net/cnxk: not in enabled drivers build config 00:05:56.383 net/cpfl: not in enabled drivers build config 00:05:56.383 net/cxgbe: not in enabled drivers build config 00:05:56.383 net/dpaa: not in enabled drivers build config 00:05:56.383 net/dpaa2: not in enabled drivers build config 00:05:56.383 net/e1000: not in enabled drivers build config 00:05:56.383 net/ena: not in enabled drivers build config 00:05:56.383 net/enetc: not in enabled drivers build config 00:05:56.383 net/enetfec: not in enabled drivers build config 00:05:56.383 net/enic: not in enabled drivers build config 00:05:56.383 net/failsafe: not in enabled drivers build config 00:05:56.383 net/fm10k: not in enabled drivers build config 00:05:56.383 net/gve: not in enabled drivers build config 00:05:56.383 net/hinic: not in enabled drivers build config 00:05:56.383 net/hns3: not in enabled drivers build config 00:05:56.383 net/i40e: not in enabled drivers build config 00:05:56.383 net/iavf: not in enabled drivers build config 00:05:56.383 net/ice: not in enabled drivers build config 00:05:56.383 net/idpf: not in enabled drivers build config 00:05:56.383 net/igc: not in enabled drivers build config 00:05:56.383 net/ionic: not in enabled drivers build config 00:05:56.383 net/ipn3ke: not in enabled drivers build config 00:05:56.383 net/ixgbe: not in enabled drivers build config 00:05:56.383 net/mana: not in enabled drivers build config 00:05:56.383 net/memif: not in enabled drivers build config 00:05:56.383 net/mlx4: not in enabled drivers build config 00:05:56.383 net/mlx5: not in enabled drivers build config 00:05:56.383 net/mvneta: not in enabled drivers build config 00:05:56.383 net/mvpp2: not in enabled drivers build config 00:05:56.383 net/netvsc: not in enabled drivers build config 00:05:56.383 net/nfb: not in enabled drivers build config 00:05:56.383 net/nfp: not in enabled drivers build config 00:05:56.383 net/ngbe: not in enabled drivers build config 00:05:56.383 net/null: not in enabled drivers build config 00:05:56.383 net/octeontx: not in enabled drivers build config 00:05:56.383 net/octeon_ep: not in enabled drivers build config 00:05:56.383 net/pcap: not in enabled drivers build config 00:05:56.383 net/pfe: not in enabled drivers build config 00:05:56.383 net/qede: not in enabled drivers build config 00:05:56.383 net/ring: not in enabled drivers build config 00:05:56.383 net/sfc: not in enabled drivers build config 00:05:56.383 net/softnic: not in enabled drivers build config 00:05:56.383 net/tap: not in enabled drivers build config 00:05:56.383 net/thunderx: not in enabled drivers build config 00:05:56.383 net/txgbe: not in enabled drivers build config 00:05:56.383 net/vdev_netvsc: not in enabled drivers build config 00:05:56.383 net/vhost: not in enabled drivers build config 00:05:56.383 net/virtio: not in enabled drivers build config 00:05:56.383 net/vmxnet3: not in enabled drivers build config 00:05:56.383 raw/*: 
missing internal dependency, "rawdev" 00:05:56.383 crypto/armv8: not in enabled drivers build config 00:05:56.383 crypto/bcmfs: not in enabled drivers build config 00:05:56.383 crypto/caam_jr: not in enabled drivers build config 00:05:56.383 crypto/ccp: not in enabled drivers build config 00:05:56.383 crypto/cnxk: not in enabled drivers build config 00:05:56.383 crypto/dpaa_sec: not in enabled drivers build config 00:05:56.383 crypto/dpaa2_sec: not in enabled drivers build config 00:05:56.383 crypto/ipsec_mb: not in enabled drivers build config 00:05:56.383 crypto/mlx5: not in enabled drivers build config 00:05:56.383 crypto/mvsam: not in enabled drivers build config 00:05:56.383 crypto/nitrox: not in enabled drivers build config 00:05:56.383 crypto/null: not in enabled drivers build config 00:05:56.383 crypto/octeontx: not in enabled drivers build config 00:05:56.383 crypto/openssl: not in enabled drivers build config 00:05:56.383 crypto/scheduler: not in enabled drivers build config 00:05:56.383 crypto/uadk: not in enabled drivers build config 00:05:56.383 crypto/virtio: not in enabled drivers build config 00:05:56.383 compress/isal: not in enabled drivers build config 00:05:56.383 compress/mlx5: not in enabled drivers build config 00:05:56.383 compress/octeontx: not in enabled drivers build config 00:05:56.383 compress/zlib: not in enabled drivers build config 00:05:56.383 regex/*: missing internal dependency, "regexdev" 00:05:56.383 ml/*: missing internal dependency, "mldev" 00:05:56.383 vdpa/ifc: not in enabled drivers build config 00:05:56.383 vdpa/mlx5: not in enabled drivers build config 00:05:56.383 vdpa/nfp: not in enabled drivers build config 00:05:56.383 vdpa/sfc: not in enabled drivers build config 00:05:56.383 event/*: missing internal dependency, "eventdev" 00:05:56.383 baseband/*: missing internal dependency, "bbdev" 00:05:56.383 gpu/*: missing internal dependency, "gpudev" 00:05:56.383 00:05:56.383 00:05:56.383 Build targets in project: 85 00:05:56.383 00:05:56.383 DPDK 23.11.0 00:05:56.383 00:05:56.383 User defined options 00:05:56.383 buildtype : debug 00:05:56.383 default_library : shared 00:05:56.383 libdir : lib 00:05:56.383 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:56.383 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:56.383 c_link_args : 00:05:56.383 cpu_instruction_set: native 00:05:56.383 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:56.383 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:56.383 enable_docs : false 00:05:56.383 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:56.383 enable_kmods : false 00:05:56.383 tests : false 00:05:56.383 00:05:56.383 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:56.383 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:56.383 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:56.383 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:56.383 [3/265] Linking static target lib/librte_kvargs.a 00:05:56.383 [4/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:56.383 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:56.383 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:56.383 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:56.383 [8/265] Linking static target lib/librte_log.a 00:05:56.383 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:56.383 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:56.383 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.383 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:56.383 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:56.383 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:56.383 [15/265] Linking static target lib/librte_telemetry.a 00:05:56.383 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:56.383 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:56.383 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:56.383 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.383 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:56.383 [21/265] Linking target lib/librte_log.so.24.0 00:05:56.383 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:56.641 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:56.641 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:56.900 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:05:56.900 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:56.900 [27/265] Linking target lib/librte_kvargs.so.24.0 00:05:56.900 [28/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.900 [29/265] Linking target lib/librte_telemetry.so.24.0 00:05:57.158 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:57.158 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:57.158 [32/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:05:57.158 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:57.158 [34/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:05:57.158 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:57.416 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:57.416 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:57.416 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:57.416 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:57.675 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:57.675 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:57.675 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:57.675 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:57.675 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:57.933 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:58.192 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:58.192 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:58.450 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:58.450 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:58.450 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:58.450 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:58.709 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:58.709 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:58.709 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:58.709 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:58.709 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:58.967 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:58.967 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:59.225 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:59.225 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:59.225 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:59.483 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:59.483 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:59.483 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:59.483 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:59.483 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:59.742 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:59.742 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:59.742 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:00.001 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:00.001 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:00.001 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:00.001 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:00.001 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:00.001 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:00.259 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:00.259 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:00.518 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:00.518 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:00.518 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:00.518 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:00.776 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:00.776 [83/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:00.776 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:00.776 [85/265] Linking static target lib/librte_eal.a 00:06:00.776 [86/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:00.776 [87/265] Linking static target lib/librte_rcu.a 00:06:01.034 [88/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:01.034 [89/265] Linking static target lib/librte_ring.a 00:06:01.034 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:01.293 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:01.293 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:01.293 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:01.293 [94/265] Linking static target lib/librte_mempool.a 00:06:01.551 [95/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.551 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:01.551 [97/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.551 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:01.551 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:01.809 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:01.809 [101/265] Linking static target lib/librte_mbuf.a 00:06:02.067 [102/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:02.067 [103/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:02.067 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:02.067 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:02.067 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:02.326 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:02.326 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:02.326 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:02.326 [110/265] Linking static target lib/librte_meter.a 00:06:02.326 [111/265] Linking static target lib/librte_net.a 00:06:02.584 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.584 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:02.842 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:02.842 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.842 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.842 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.842 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:03.101 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:03.359 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:03.618 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:03.618 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:03.875 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:03.875 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:06:03.875 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:03.875 [126/265] Linking static target lib/librte_pci.a 00:06:03.875 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:03.875 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:04.133 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:04.133 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:04.133 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:04.391 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:04.391 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.391 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:04.391 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:04.391 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:04.649 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:04.649 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:04.649 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:04.649 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:04.649 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:04.649 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:04.649 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:04.907 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:04.907 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:04.907 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:04.907 [147/265] Linking static target lib/librte_cmdline.a 00:06:04.907 [148/265] Linking static target lib/librte_ethdev.a 00:06:05.165 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:05.423 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:05.423 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:05.681 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:05.681 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:05.681 [154/265] Linking static target lib/librte_timer.a 00:06:05.682 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:05.682 [156/265] Linking static target lib/librte_compressdev.a 00:06:05.940 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:05.940 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:06.198 [159/265] Linking static target lib/librte_hash.a 00:06:06.455 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:06.455 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.455 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:06.455 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:06.712 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:06.712 [165/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:06.712 [166/265] Linking static target lib/librte_cryptodev.a 00:06:06.712 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.712 [168/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:06.712 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:06.712 [170/265] Linking static target lib/librte_dmadev.a 00:06:06.712 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.970 [172/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:07.227 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:07.227 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.227 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:07.227 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:07.485 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:07.485 [178/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.485 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:07.485 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:07.743 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:08.002 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:08.002 [183/265] Linking static target lib/librte_power.a 00:06:08.002 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:08.002 [185/265] Linking static target lib/librte_reorder.a 00:06:08.002 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:08.002 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:08.002 [188/265] Linking static target lib/librte_security.a 00:06:08.262 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:08.262 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:08.520 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.520 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:08.778 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.778 [194/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.778 [195/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.778 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:09.036 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:09.294 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:09.294 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:09.294 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:09.294 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:09.552 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:09.552 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:09.552 [204/265] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:09.811 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:09.811 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:09.811 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:10.069 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:10.069 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:10.069 [210/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:10.069 [211/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:10.069 [212/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:10.328 [213/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:10.328 [214/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:10.328 [215/265] Linking static target drivers/librte_bus_vdev.a 00:06:10.328 [216/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:10.328 [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:10.328 [218/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:10.328 [219/265] Linking static target drivers/librte_bus_pci.a 00:06:10.586 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:10.586 [221/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.586 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:10.586 [223/265] Linking static target drivers/librte_mempool_ring.a 00:06:10.586 [224/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:10.844 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.409 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:11.409 [227/265] Linking static target lib/librte_vhost.a 00:06:11.975 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.975 [229/265] Linking target lib/librte_eal.so.24.0 00:06:12.233 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:06:12.233 [231/265] Linking target drivers/librte_bus_vdev.so.24.0 00:06:12.234 [232/265] Linking target lib/librte_ring.so.24.0 00:06:12.234 [233/265] Linking target lib/librte_pci.so.24.0 00:06:12.234 [234/265] Linking target lib/librte_meter.so.24.0 00:06:12.234 [235/265] Linking target lib/librte_timer.so.24.0 00:06:12.234 [236/265] Linking target lib/librte_dmadev.so.24.0 00:06:12.514 [237/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.514 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:06:12.514 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:06:12.514 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:06:12.514 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:06:12.514 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:06:12.514 [243/265] Linking target 
drivers/librte_bus_pci.so.24.0 00:06:12.514 [244/265] Linking target lib/librte_rcu.so.24.0 00:06:12.514 [245/265] Linking target lib/librte_mempool.so.24.0 00:06:12.514 [246/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.772 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:06:12.772 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:06:12.772 [249/265] Linking target lib/librte_mbuf.so.24.0 00:06:12.772 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:06:13.030 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:06:13.030 [252/265] Linking target lib/librte_net.so.24.0 00:06:13.030 [253/265] Linking target lib/librte_reorder.so.24.0 00:06:13.030 [254/265] Linking target lib/librte_compressdev.so.24.0 00:06:13.030 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:06:13.030 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:06:13.289 [257/265] Linking target lib/librte_cmdline.so.24.0 00:06:13.289 [258/265] Linking target lib/librte_hash.so.24.0 00:06:13.289 [259/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:06:13.289 [260/265] Linking target lib/librte_ethdev.so.24.0 00:06:13.289 [261/265] Linking target lib/librte_security.so.24.0 00:06:13.289 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:06:13.289 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:06:13.289 [264/265] Linking target lib/librte_power.so.24.0 00:06:13.547 [265/265] Linking target lib/librte_vhost.so.24.0 00:06:13.547 INFO: autodetecting backend as ninja 00:06:13.547 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:14.484 CC lib/ut/ut.o 00:06:14.484 CC lib/log/log_flags.o 00:06:14.484 CC lib/log/log.o 00:06:14.484 CC lib/log/log_deprecated.o 00:06:14.484 CC lib/ut_mock/mock.o 00:06:14.742 LIB libspdk_ut_mock.a 00:06:14.742 LIB libspdk_ut.a 00:06:14.742 LIB libspdk_log.a 00:06:14.742 SO libspdk_ut_mock.so.6.0 00:06:14.742 SO libspdk_ut.so.2.0 00:06:14.742 SO libspdk_log.so.7.0 00:06:14.742 SYMLINK libspdk_ut_mock.so 00:06:14.742 SYMLINK libspdk_ut.so 00:06:14.742 SYMLINK libspdk_log.so 00:06:15.000 CC lib/ioat/ioat.o 00:06:15.000 CC lib/util/base64.o 00:06:15.000 CXX lib/trace_parser/trace.o 00:06:15.000 CC lib/util/bit_array.o 00:06:15.000 CC lib/util/crc16.o 00:06:15.000 CC lib/util/cpuset.o 00:06:15.000 CC lib/util/crc32.o 00:06:15.000 CC lib/util/crc32c.o 00:06:15.000 CC lib/dma/dma.o 00:06:15.258 CC lib/vfio_user/host/vfio_user_pci.o 00:06:15.258 CC lib/util/crc32_ieee.o 00:06:15.258 CC lib/vfio_user/host/vfio_user.o 00:06:15.258 CC lib/util/crc64.o 00:06:15.258 CC lib/util/dif.o 00:06:15.258 LIB libspdk_dma.a 00:06:15.258 CC lib/util/fd.o 00:06:15.258 CC lib/util/file.o 00:06:15.258 SO libspdk_dma.so.4.0 00:06:15.258 SYMLINK libspdk_dma.so 00:06:15.515 CC lib/util/hexlify.o 00:06:15.516 CC lib/util/iov.o 00:06:15.516 CC lib/util/math.o 00:06:15.516 CC lib/util/pipe.o 00:06:15.516 CC lib/util/strerror_tls.o 00:06:15.516 LIB libspdk_ioat.a 00:06:15.516 CC lib/util/string.o 00:06:15.516 LIB libspdk_vfio_user.a 00:06:15.516 SO libspdk_ioat.so.7.0 00:06:15.516 SO libspdk_vfio_user.so.5.0 00:06:15.516 SYMLINK libspdk_ioat.so 00:06:15.516 CC lib/util/uuid.o 00:06:15.516 CC 
lib/util/fd_group.o 00:06:15.516 CC lib/util/xor.o 00:06:15.516 CC lib/util/zipf.o 00:06:15.516 SYMLINK libspdk_vfio_user.so 00:06:15.774 LIB libspdk_util.a 00:06:16.032 SO libspdk_util.so.9.0 00:06:16.032 LIB libspdk_trace_parser.a 00:06:16.032 SO libspdk_trace_parser.so.5.0 00:06:16.032 SYMLINK libspdk_util.so 00:06:16.290 SYMLINK libspdk_trace_parser.so 00:06:16.290 CC lib/conf/conf.o 00:06:16.290 CC lib/json/json_parse.o 00:06:16.290 CC lib/vmd/vmd.o 00:06:16.290 CC lib/json/json_util.o 00:06:16.290 CC lib/json/json_write.o 00:06:16.290 CC lib/vmd/led.o 00:06:16.290 CC lib/idxd/idxd.o 00:06:16.290 CC lib/idxd/idxd_user.o 00:06:16.290 CC lib/rdma/common.o 00:06:16.290 CC lib/env_dpdk/env.o 00:06:16.548 CC lib/env_dpdk/memory.o 00:06:16.807 CC lib/env_dpdk/pci.o 00:06:16.807 LIB libspdk_conf.a 00:06:16.807 CC lib/rdma/rdma_verbs.o 00:06:16.807 CC lib/env_dpdk/init.o 00:06:16.807 CC lib/env_dpdk/threads.o 00:06:16.807 SO libspdk_conf.so.6.0 00:06:16.807 LIB libspdk_json.a 00:06:16.807 SYMLINK libspdk_conf.so 00:06:16.807 CC lib/env_dpdk/pci_ioat.o 00:06:16.807 SO libspdk_json.so.6.0 00:06:16.807 LIB libspdk_rdma.a 00:06:17.065 SO libspdk_rdma.so.6.0 00:06:17.065 SYMLINK libspdk_json.so 00:06:17.065 CC lib/env_dpdk/pci_virtio.o 00:06:17.065 SYMLINK libspdk_rdma.so 00:06:17.065 CC lib/env_dpdk/pci_vmd.o 00:06:17.065 CC lib/env_dpdk/pci_idxd.o 00:06:17.065 CC lib/env_dpdk/pci_event.o 00:06:17.065 CC lib/env_dpdk/sigbus_handler.o 00:06:17.065 LIB libspdk_idxd.a 00:06:17.065 SO libspdk_idxd.so.12.0 00:06:17.065 CC lib/env_dpdk/pci_dpdk.o 00:06:17.323 CC lib/jsonrpc/jsonrpc_server.o 00:06:17.323 SYMLINK libspdk_idxd.so 00:06:17.323 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:17.323 CC lib/jsonrpc/jsonrpc_client.o 00:06:17.323 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:17.323 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:17.323 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:17.323 LIB libspdk_vmd.a 00:06:17.323 SO libspdk_vmd.so.6.0 00:06:17.581 SYMLINK libspdk_vmd.so 00:06:17.581 LIB libspdk_jsonrpc.a 00:06:17.581 SO libspdk_jsonrpc.so.6.0 00:06:17.840 SYMLINK libspdk_jsonrpc.so 00:06:17.840 CC lib/rpc/rpc.o 00:06:18.097 LIB libspdk_rpc.a 00:06:18.356 LIB libspdk_env_dpdk.a 00:06:18.356 SO libspdk_rpc.so.6.0 00:06:18.356 SYMLINK libspdk_rpc.so 00:06:18.356 SO libspdk_env_dpdk.so.14.0 00:06:18.667 CC lib/trace/trace.o 00:06:18.667 CC lib/trace/trace_flags.o 00:06:18.667 CC lib/trace/trace_rpc.o 00:06:18.667 CC lib/keyring/keyring.o 00:06:18.667 CC lib/keyring/keyring_rpc.o 00:06:18.667 CC lib/notify/notify.o 00:06:18.667 CC lib/notify/notify_rpc.o 00:06:18.667 SYMLINK libspdk_env_dpdk.so 00:06:18.667 LIB libspdk_keyring.a 00:06:18.667 LIB libspdk_notify.a 00:06:18.667 SO libspdk_keyring.so.1.0 00:06:18.925 SO libspdk_notify.so.6.0 00:06:18.925 LIB libspdk_trace.a 00:06:18.925 SYMLINK libspdk_keyring.so 00:06:18.925 SO libspdk_trace.so.10.0 00:06:18.925 SYMLINK libspdk_notify.so 00:06:18.925 SYMLINK libspdk_trace.so 00:06:19.184 CC lib/sock/sock.o 00:06:19.184 CC lib/sock/sock_rpc.o 00:06:19.184 CC lib/thread/thread.o 00:06:19.184 CC lib/thread/iobuf.o 00:06:19.751 LIB libspdk_sock.a 00:06:19.751 SO libspdk_sock.so.9.0 00:06:19.751 SYMLINK libspdk_sock.so 00:06:20.019 CC lib/nvme/nvme_ctrlr.o 00:06:20.019 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:20.019 CC lib/nvme/nvme_ns_cmd.o 00:06:20.019 CC lib/nvme/nvme_fabric.o 00:06:20.020 CC lib/nvme/nvme_pcie.o 00:06:20.020 CC lib/nvme/nvme_pcie_common.o 00:06:20.020 CC lib/nvme/nvme_ns.o 00:06:20.020 CC lib/nvme/nvme_qpair.o 00:06:20.020 CC lib/nvme/nvme.o 00:06:20.983 CC 
lib/nvme/nvme_quirks.o 00:06:20.983 LIB libspdk_thread.a 00:06:20.983 CC lib/nvme/nvme_transport.o 00:06:20.983 SO libspdk_thread.so.10.0 00:06:20.983 SYMLINK libspdk_thread.so 00:06:20.983 CC lib/nvme/nvme_discovery.o 00:06:20.983 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:20.983 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:20.983 CC lib/nvme/nvme_tcp.o 00:06:21.241 CC lib/accel/accel.o 00:06:21.241 CC lib/accel/accel_rpc.o 00:06:21.241 CC lib/accel/accel_sw.o 00:06:21.497 CC lib/nvme/nvme_opal.o 00:06:21.497 CC lib/nvme/nvme_io_msg.o 00:06:21.497 CC lib/nvme/nvme_poll_group.o 00:06:21.754 CC lib/nvme/nvme_zns.o 00:06:21.754 CC lib/nvme/nvme_stubs.o 00:06:21.754 CC lib/blob/blobstore.o 00:06:22.012 CC lib/nvme/nvme_auth.o 00:06:22.012 CC lib/blob/request.o 00:06:22.270 LIB libspdk_accel.a 00:06:22.270 CC lib/nvme/nvme_cuse.o 00:06:22.270 SO libspdk_accel.so.15.0 00:06:22.270 SYMLINK libspdk_accel.so 00:06:22.270 CC lib/blob/zeroes.o 00:06:22.270 CC lib/blob/blob_bs_dev.o 00:06:22.529 CC lib/nvme/nvme_rdma.o 00:06:22.529 CC lib/init/json_config.o 00:06:22.529 CC lib/init/subsystem.o 00:06:22.787 CC lib/virtio/virtio.o 00:06:22.787 CC lib/bdev/bdev.o 00:06:22.787 CC lib/bdev/bdev_rpc.o 00:06:23.045 CC lib/bdev/bdev_zone.o 00:06:23.045 CC lib/init/subsystem_rpc.o 00:06:23.045 CC lib/virtio/virtio_vhost_user.o 00:06:23.303 CC lib/bdev/part.o 00:06:23.303 CC lib/init/rpc.o 00:06:23.303 CC lib/bdev/scsi_nvme.o 00:06:23.303 CC lib/virtio/virtio_vfio_user.o 00:06:23.303 LIB libspdk_init.a 00:06:23.303 SO libspdk_init.so.5.0 00:06:23.561 CC lib/virtio/virtio_pci.o 00:06:23.561 SYMLINK libspdk_init.so 00:06:23.819 CC lib/event/app.o 00:06:23.819 CC lib/event/reactor.o 00:06:23.819 CC lib/event/log_rpc.o 00:06:23.819 CC lib/event/app_rpc.o 00:06:23.819 CC lib/event/scheduler_static.o 00:06:23.819 LIB libspdk_nvme.a 00:06:23.819 LIB libspdk_virtio.a 00:06:23.819 SO libspdk_virtio.so.7.0 00:06:24.078 SO libspdk_nvme.so.13.0 00:06:24.078 SYMLINK libspdk_virtio.so 00:06:24.078 LIB libspdk_event.a 00:06:24.337 SO libspdk_event.so.13.0 00:06:24.337 SYMLINK libspdk_nvme.so 00:06:24.337 SYMLINK libspdk_event.so 00:06:25.284 LIB libspdk_blob.a 00:06:25.284 SO libspdk_blob.so.11.0 00:06:25.561 SYMLINK libspdk_blob.so 00:06:25.561 LIB libspdk_bdev.a 00:06:25.561 SO libspdk_bdev.so.15.0 00:06:25.820 CC lib/blobfs/blobfs.o 00:06:25.820 CC lib/blobfs/tree.o 00:06:25.820 CC lib/lvol/lvol.o 00:06:25.820 SYMLINK libspdk_bdev.so 00:06:26.078 CC lib/ublk/ublk.o 00:06:26.078 CC lib/ftl/ftl_core.o 00:06:26.078 CC lib/ublk/ublk_rpc.o 00:06:26.078 CC lib/ftl/ftl_init.o 00:06:26.078 CC lib/ftl/ftl_layout.o 00:06:26.078 CC lib/scsi/dev.o 00:06:26.078 CC lib/nvmf/ctrlr.o 00:06:26.078 CC lib/nbd/nbd.o 00:06:26.336 CC lib/nbd/nbd_rpc.o 00:06:26.336 CC lib/scsi/lun.o 00:06:26.336 CC lib/ftl/ftl_debug.o 00:06:26.336 CC lib/ftl/ftl_io.o 00:06:26.336 CC lib/ftl/ftl_sb.o 00:06:26.336 CC lib/ftl/ftl_l2p.o 00:06:26.594 CC lib/ftl/ftl_l2p_flat.o 00:06:26.594 LIB libspdk_nbd.a 00:06:26.594 CC lib/scsi/port.o 00:06:26.594 LIB libspdk_blobfs.a 00:06:26.594 SO libspdk_nbd.so.7.0 00:06:26.594 SO libspdk_blobfs.so.10.0 00:06:26.594 CC lib/ftl/ftl_nv_cache.o 00:06:26.594 CC lib/ftl/ftl_band.o 00:06:26.595 SYMLINK libspdk_nbd.so 00:06:26.595 LIB libspdk_ublk.a 00:06:26.595 CC lib/scsi/scsi.o 00:06:26.595 SYMLINK libspdk_blobfs.so 00:06:26.853 CC lib/scsi/scsi_bdev.o 00:06:26.853 CC lib/scsi/scsi_pr.o 00:06:26.853 SO libspdk_ublk.so.3.0 00:06:26.853 CC lib/nvmf/ctrlr_discovery.o 00:06:26.853 SYMLINK libspdk_ublk.so 00:06:26.853 CC lib/scsi/scsi_rpc.o 
00:06:26.853 CC lib/ftl/ftl_band_ops.o 00:06:26.853 LIB libspdk_lvol.a 00:06:26.853 CC lib/scsi/task.o 00:06:26.853 SO libspdk_lvol.so.10.0 00:06:26.853 CC lib/nvmf/ctrlr_bdev.o 00:06:27.109 SYMLINK libspdk_lvol.so 00:06:27.109 CC lib/ftl/ftl_writer.o 00:06:27.109 CC lib/nvmf/subsystem.o 00:06:27.109 CC lib/ftl/ftl_rq.o 00:06:27.109 CC lib/ftl/ftl_reloc.o 00:06:27.109 CC lib/ftl/ftl_l2p_cache.o 00:06:27.109 LIB libspdk_scsi.a 00:06:27.367 SO libspdk_scsi.so.9.0 00:06:27.367 CC lib/ftl/ftl_p2l.o 00:06:27.367 CC lib/ftl/mngt/ftl_mngt.o 00:06:27.367 CC lib/nvmf/nvmf.o 00:06:27.367 SYMLINK libspdk_scsi.so 00:06:27.367 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:27.624 CC lib/nvmf/nvmf_rpc.o 00:06:27.624 CC lib/nvmf/transport.o 00:06:27.882 CC lib/nvmf/tcp.o 00:06:28.139 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:28.139 CC lib/nvmf/stubs.o 00:06:28.139 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:28.139 CC lib/iscsi/conn.o 00:06:28.139 CC lib/nvmf/rdma.o 00:06:28.397 CC lib/vhost/vhost.o 00:06:28.397 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:28.397 CC lib/nvmf/auth.o 00:06:28.656 CC lib/iscsi/init_grp.o 00:06:28.922 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:28.922 CC lib/vhost/vhost_rpc.o 00:06:28.922 CC lib/vhost/vhost_scsi.o 00:06:28.922 CC lib/iscsi/iscsi.o 00:06:28.922 CC lib/vhost/vhost_blk.o 00:06:28.922 CC lib/vhost/rte_vhost_user.o 00:06:29.187 CC lib/iscsi/md5.o 00:06:29.187 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:29.187 CC lib/iscsi/param.o 00:06:29.446 CC lib/iscsi/portal_grp.o 00:06:29.703 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:29.703 CC lib/iscsi/tgt_node.o 00:06:29.703 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:29.961 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:29.961 CC lib/iscsi/iscsi_subsystem.o 00:06:30.218 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:30.218 CC lib/iscsi/iscsi_rpc.o 00:06:30.218 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:30.487 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:30.487 CC lib/iscsi/task.o 00:06:30.487 CC lib/ftl/utils/ftl_conf.o 00:06:30.487 CC lib/ftl/utils/ftl_md.o 00:06:30.487 CC lib/ftl/utils/ftl_mempool.o 00:06:30.761 CC lib/ftl/utils/ftl_bitmap.o 00:06:30.761 CC lib/ftl/utils/ftl_property.o 00:06:30.761 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:30.761 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:30.761 LIB libspdk_vhost.a 00:06:30.761 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:31.018 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:31.018 LIB libspdk_iscsi.a 00:06:31.018 SO libspdk_vhost.so.8.0 00:06:31.018 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:31.018 SO libspdk_iscsi.so.8.0 00:06:31.018 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:31.276 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:31.276 SYMLINK libspdk_vhost.so 00:06:31.276 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:31.276 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:31.276 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:31.276 CC lib/ftl/base/ftl_base_dev.o 00:06:31.276 CC lib/ftl/base/ftl_base_bdev.o 00:06:31.276 SYMLINK libspdk_iscsi.so 00:06:31.276 CC lib/ftl/ftl_trace.o 00:06:31.534 LIB libspdk_ftl.a 00:06:31.792 LIB libspdk_nvmf.a 00:06:31.792 SO libspdk_ftl.so.9.0 00:06:32.049 SO libspdk_nvmf.so.18.0 00:06:32.307 SYMLINK libspdk_nvmf.so 00:06:32.307 SYMLINK libspdk_ftl.so 00:06:32.871 CC module/env_dpdk/env_dpdk_rpc.o 00:06:32.871 CC module/accel/dsa/accel_dsa.o 00:06:32.871 CC module/blob/bdev/blob_bdev.o 00:06:32.871 CC module/sock/posix/posix.o 00:06:32.871 CC module/accel/iaa/accel_iaa.o 00:06:32.871 CC module/accel/ioat/accel_ioat.o 00:06:32.871 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:32.871 CC module/keyring/file/keyring.o 
00:06:32.871 CC module/accel/error/accel_error.o 00:06:32.871 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:32.871 LIB libspdk_env_dpdk_rpc.a 00:06:32.871 SO libspdk_env_dpdk_rpc.so.6.0 00:06:32.871 SYMLINK libspdk_env_dpdk_rpc.so 00:06:32.871 CC module/accel/error/accel_error_rpc.o 00:06:33.129 CC module/accel/dsa/accel_dsa_rpc.o 00:06:33.129 CC module/keyring/file/keyring_rpc.o 00:06:33.129 LIB libspdk_scheduler_dpdk_governor.a 00:06:33.129 LIB libspdk_blob_bdev.a 00:06:33.129 CC module/accel/iaa/accel_iaa_rpc.o 00:06:33.129 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:33.129 CC module/accel/ioat/accel_ioat_rpc.o 00:06:33.129 LIB libspdk_accel_error.a 00:06:33.129 SO libspdk_blob_bdev.so.11.0 00:06:33.129 LIB libspdk_scheduler_dynamic.a 00:06:33.129 SO libspdk_accel_error.so.2.0 00:06:33.129 SO libspdk_scheduler_dynamic.so.4.0 00:06:33.129 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:33.129 SYMLINK libspdk_blob_bdev.so 00:06:33.129 LIB libspdk_keyring_file.a 00:06:33.129 SYMLINK libspdk_scheduler_dynamic.so 00:06:33.387 SO libspdk_keyring_file.so.1.0 00:06:33.387 SYMLINK libspdk_accel_error.so 00:06:33.387 LIB libspdk_accel_dsa.a 00:06:33.387 LIB libspdk_accel_iaa.a 00:06:33.387 SO libspdk_accel_dsa.so.5.0 00:06:33.387 SO libspdk_accel_iaa.so.3.0 00:06:33.387 SYMLINK libspdk_keyring_file.so 00:06:33.387 LIB libspdk_accel_ioat.a 00:06:33.387 SYMLINK libspdk_accel_dsa.so 00:06:33.387 SO libspdk_accel_ioat.so.6.0 00:06:33.387 SYMLINK libspdk_accel_iaa.so 00:06:33.387 CC module/scheduler/gscheduler/gscheduler.o 00:06:33.387 SYMLINK libspdk_accel_ioat.so 00:06:33.645 CC module/bdev/delay/vbdev_delay.o 00:06:33.645 CC module/bdev/error/vbdev_error.o 00:06:33.645 CC module/blobfs/bdev/blobfs_bdev.o 00:06:33.645 CC module/bdev/gpt/gpt.o 00:06:33.645 LIB libspdk_sock_posix.a 00:06:33.645 LIB libspdk_scheduler_gscheduler.a 00:06:33.645 CC module/bdev/lvol/vbdev_lvol.o 00:06:33.645 CC module/bdev/malloc/bdev_malloc.o 00:06:33.645 SO libspdk_sock_posix.so.6.0 00:06:33.645 SO libspdk_scheduler_gscheduler.so.4.0 00:06:33.645 CC module/bdev/null/bdev_null.o 00:06:33.645 SYMLINK libspdk_scheduler_gscheduler.so 00:06:33.645 CC module/bdev/null/bdev_null_rpc.o 00:06:33.645 CC module/bdev/nvme/bdev_nvme.o 00:06:33.645 SYMLINK libspdk_sock_posix.so 00:06:33.645 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:33.903 CC module/bdev/gpt/vbdev_gpt.o 00:06:33.903 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:33.903 CC module/bdev/error/vbdev_error_rpc.o 00:06:33.903 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:34.160 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:34.160 LIB libspdk_bdev_null.a 00:06:34.160 SO libspdk_bdev_null.so.6.0 00:06:34.160 LIB libspdk_bdev_error.a 00:06:34.160 SO libspdk_bdev_error.so.6.0 00:06:34.160 SYMLINK libspdk_bdev_null.so 00:06:34.160 LIB libspdk_blobfs_bdev.a 00:06:34.160 SYMLINK libspdk_bdev_error.so 00:06:34.160 LIB libspdk_bdev_gpt.a 00:06:34.160 LIB libspdk_bdev_malloc.a 00:06:34.160 SO libspdk_blobfs_bdev.so.6.0 00:06:34.160 SO libspdk_bdev_gpt.so.6.0 00:06:34.418 SO libspdk_bdev_malloc.so.6.0 00:06:34.418 CC module/bdev/passthru/vbdev_passthru.o 00:06:34.418 LIB libspdk_bdev_delay.a 00:06:34.418 SYMLINK libspdk_bdev_malloc.so 00:06:34.418 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:34.418 SYMLINK libspdk_blobfs_bdev.so 00:06:34.418 SYMLINK libspdk_bdev_gpt.so 00:06:34.418 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:34.418 SO libspdk_bdev_delay.so.6.0 00:06:34.418 CC module/bdev/raid/bdev_raid.o 00:06:34.418 LIB libspdk_bdev_lvol.a 00:06:34.418 CC 
module/bdev/split/vbdev_split.o 00:06:34.418 SYMLINK libspdk_bdev_delay.so 00:06:34.676 SO libspdk_bdev_lvol.so.6.0 00:06:34.676 SYMLINK libspdk_bdev_lvol.so 00:06:34.676 CC module/bdev/nvme/nvme_rpc.o 00:06:34.676 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:34.676 CC module/bdev/aio/bdev_aio.o 00:06:34.676 CC module/bdev/nvme/bdev_mdns_client.o 00:06:34.676 LIB libspdk_bdev_passthru.a 00:06:34.676 CC module/bdev/ftl/bdev_ftl.o 00:06:34.676 SO libspdk_bdev_passthru.so.6.0 00:06:34.934 SYMLINK libspdk_bdev_passthru.so 00:06:34.934 CC module/bdev/split/vbdev_split_rpc.o 00:06:34.934 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:34.934 LIB libspdk_bdev_split.a 00:06:34.934 CC module/bdev/aio/bdev_aio_rpc.o 00:06:34.934 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:35.192 SO libspdk_bdev_split.so.6.0 00:06:35.192 CC module/bdev/nvme/vbdev_opal.o 00:06:35.192 LIB libspdk_bdev_ftl.a 00:06:35.192 CC module/bdev/raid/bdev_raid_rpc.o 00:06:35.192 SO libspdk_bdev_ftl.so.6.0 00:06:35.192 SYMLINK libspdk_bdev_split.so 00:06:35.192 CC module/bdev/raid/bdev_raid_sb.o 00:06:35.192 LIB libspdk_bdev_zone_block.a 00:06:35.192 SYMLINK libspdk_bdev_ftl.so 00:06:35.192 CC module/bdev/raid/raid0.o 00:06:35.192 CC module/bdev/iscsi/bdev_iscsi.o 00:06:35.192 LIB libspdk_bdev_aio.a 00:06:35.192 SO libspdk_bdev_zone_block.so.6.0 00:06:35.451 SO libspdk_bdev_aio.so.6.0 00:06:35.451 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:35.451 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:35.451 SYMLINK libspdk_bdev_zone_block.so 00:06:35.451 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:35.451 SYMLINK libspdk_bdev_aio.so 00:06:35.451 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:35.451 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:35.451 CC module/bdev/raid/raid1.o 00:06:35.709 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:35.709 CC module/bdev/raid/concat.o 00:06:36.003 LIB libspdk_bdev_iscsi.a 00:06:36.003 SO libspdk_bdev_iscsi.so.6.0 00:06:36.003 SYMLINK libspdk_bdev_iscsi.so 00:06:36.003 LIB libspdk_bdev_raid.a 00:06:36.282 SO libspdk_bdev_raid.so.6.0 00:06:36.282 LIB libspdk_bdev_virtio.a 00:06:36.282 SYMLINK libspdk_bdev_raid.so 00:06:36.282 SO libspdk_bdev_virtio.so.6.0 00:06:36.282 SYMLINK libspdk_bdev_virtio.so 00:06:37.215 LIB libspdk_bdev_nvme.a 00:06:37.215 SO libspdk_bdev_nvme.so.7.0 00:06:37.215 SYMLINK libspdk_bdev_nvme.so 00:06:37.780 CC module/event/subsystems/vmd/vmd.o 00:06:37.780 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:37.780 CC module/event/subsystems/iobuf/iobuf.o 00:06:37.780 CC module/event/subsystems/keyring/keyring.o 00:06:37.780 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:37.780 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:37.780 CC module/event/subsystems/sock/sock.o 00:06:37.780 CC module/event/subsystems/scheduler/scheduler.o 00:06:38.037 LIB libspdk_event_vhost_blk.a 00:06:38.037 LIB libspdk_event_keyring.a 00:06:38.037 LIB libspdk_event_sock.a 00:06:38.037 SO libspdk_event_vhost_blk.so.3.0 00:06:38.037 SO libspdk_event_keyring.so.1.0 00:06:38.037 SO libspdk_event_sock.so.5.0 00:06:38.037 LIB libspdk_event_vmd.a 00:06:38.037 SYMLINK libspdk_event_keyring.so 00:06:38.037 SYMLINK libspdk_event_sock.so 00:06:38.037 LIB libspdk_event_iobuf.a 00:06:38.037 SYMLINK libspdk_event_vhost_blk.so 00:06:38.037 LIB libspdk_event_scheduler.a 00:06:38.037 SO libspdk_event_vmd.so.6.0 00:06:38.037 SO libspdk_event_scheduler.so.4.0 00:06:38.037 SO libspdk_event_iobuf.so.3.0 00:06:38.037 SYMLINK libspdk_event_vmd.so 00:06:38.037 SYMLINK libspdk_event_iobuf.so 00:06:38.037 SYMLINK 
libspdk_event_scheduler.so 00:06:38.295 CC module/event/subsystems/accel/accel.o 00:06:38.553 LIB libspdk_event_accel.a 00:06:38.553 SO libspdk_event_accel.so.6.0 00:06:38.553 SYMLINK libspdk_event_accel.so 00:06:38.812 CC module/event/subsystems/bdev/bdev.o 00:06:39.071 LIB libspdk_event_bdev.a 00:06:39.071 SO libspdk_event_bdev.so.6.0 00:06:39.329 SYMLINK libspdk_event_bdev.so 00:06:39.587 CC module/event/subsystems/ublk/ublk.o 00:06:39.587 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:39.587 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:39.587 CC module/event/subsystems/nbd/nbd.o 00:06:39.587 CC module/event/subsystems/scsi/scsi.o 00:06:39.587 LIB libspdk_event_scsi.a 00:06:39.587 LIB libspdk_event_ublk.a 00:06:39.587 LIB libspdk_event_nbd.a 00:06:39.845 SO libspdk_event_scsi.so.6.0 00:06:39.845 SO libspdk_event_nbd.so.6.0 00:06:39.845 SO libspdk_event_ublk.so.3.0 00:06:39.845 LIB libspdk_event_nvmf.a 00:06:39.845 SYMLINK libspdk_event_scsi.so 00:06:39.845 SYMLINK libspdk_event_nbd.so 00:06:39.845 SYMLINK libspdk_event_ublk.so 00:06:39.845 SO libspdk_event_nvmf.so.6.0 00:06:39.845 SYMLINK libspdk_event_nvmf.so 00:06:40.104 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:40.104 CC module/event/subsystems/iscsi/iscsi.o 00:06:40.363 LIB libspdk_event_iscsi.a 00:06:40.363 LIB libspdk_event_vhost_scsi.a 00:06:40.363 SO libspdk_event_iscsi.so.6.0 00:06:40.363 SO libspdk_event_vhost_scsi.so.3.0 00:06:40.363 SYMLINK libspdk_event_iscsi.so 00:06:40.363 SYMLINK libspdk_event_vhost_scsi.so 00:06:40.642 SO libspdk.so.6.0 00:06:40.642 SYMLINK libspdk.so 00:06:40.901 CXX app/trace/trace.o 00:06:40.901 CC examples/ioat/perf/perf.o 00:06:40.901 CC examples/sock/hello_world/hello_sock.o 00:06:40.901 CC examples/nvme/hello_world/hello_world.o 00:06:40.901 CC examples/vmd/lsvmd/lsvmd.o 00:06:40.901 CC examples/accel/perf/accel_perf.o 00:06:40.901 CC examples/nvmf/nvmf/nvmf.o 00:06:40.901 CC examples/blob/hello_world/hello_blob.o 00:06:40.901 CC test/accel/dif/dif.o 00:06:40.901 CC examples/bdev/hello_world/hello_bdev.o 00:06:41.162 LINK lsvmd 00:06:41.162 LINK ioat_perf 00:06:41.162 LINK hello_bdev 00:06:41.162 LINK nvmf 00:06:41.420 LINK spdk_trace 00:06:41.420 LINK hello_world 00:06:41.420 LINK hello_sock 00:06:41.420 CC examples/ioat/verify/verify.o 00:06:41.420 LINK hello_blob 00:06:41.678 CC examples/vmd/led/led.o 00:06:41.678 CC app/trace_record/trace_record.o 00:06:41.678 LINK dif 00:06:41.678 LINK accel_perf 00:06:41.678 CC examples/bdev/bdevperf/bdevperf.o 00:06:41.678 CC examples/nvme/reconnect/reconnect.o 00:06:41.678 LINK verify 00:06:41.678 CC app/nvmf_tgt/nvmf_main.o 00:06:41.936 LINK led 00:06:41.936 CC examples/blob/cli/blobcli.o 00:06:41.936 LINK spdk_trace_record 00:06:41.936 CC test/app/bdev_svc/bdev_svc.o 00:06:42.195 LINK nvmf_tgt 00:06:42.195 LINK reconnect 00:06:42.195 LINK bdev_svc 00:06:42.195 CC test/bdev/bdevio/bdevio.o 00:06:42.195 TEST_HEADER include/spdk/accel.h 00:06:42.195 TEST_HEADER include/spdk/accel_module.h 00:06:42.195 TEST_HEADER include/spdk/assert.h 00:06:42.195 TEST_HEADER include/spdk/barrier.h 00:06:42.195 TEST_HEADER include/spdk/base64.h 00:06:42.195 TEST_HEADER include/spdk/bdev.h 00:06:42.195 TEST_HEADER include/spdk/bdev_module.h 00:06:42.195 TEST_HEADER include/spdk/bdev_zone.h 00:06:42.195 TEST_HEADER include/spdk/bit_array.h 00:06:42.195 TEST_HEADER include/spdk/bit_pool.h 00:06:42.195 TEST_HEADER include/spdk/blob_bdev.h 00:06:42.195 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:42.195 TEST_HEADER include/spdk/blobfs.h 00:06:42.195 
TEST_HEADER include/spdk/blob.h 00:06:42.195 TEST_HEADER include/spdk/conf.h 00:06:42.195 TEST_HEADER include/spdk/config.h 00:06:42.195 TEST_HEADER include/spdk/cpuset.h 00:06:42.195 TEST_HEADER include/spdk/crc16.h 00:06:42.195 TEST_HEADER include/spdk/crc32.h 00:06:42.195 TEST_HEADER include/spdk/crc64.h 00:06:42.195 TEST_HEADER include/spdk/dif.h 00:06:42.195 TEST_HEADER include/spdk/dma.h 00:06:42.195 TEST_HEADER include/spdk/endian.h 00:06:42.195 TEST_HEADER include/spdk/env_dpdk.h 00:06:42.195 TEST_HEADER include/spdk/env.h 00:06:42.195 TEST_HEADER include/spdk/event.h 00:06:42.195 TEST_HEADER include/spdk/fd_group.h 00:06:42.195 TEST_HEADER include/spdk/fd.h 00:06:42.195 TEST_HEADER include/spdk/file.h 00:06:42.195 TEST_HEADER include/spdk/ftl.h 00:06:42.195 TEST_HEADER include/spdk/gpt_spec.h 00:06:42.195 TEST_HEADER include/spdk/hexlify.h 00:06:42.195 TEST_HEADER include/spdk/histogram_data.h 00:06:42.195 TEST_HEADER include/spdk/idxd.h 00:06:42.195 TEST_HEADER include/spdk/idxd_spec.h 00:06:42.195 TEST_HEADER include/spdk/init.h 00:06:42.195 TEST_HEADER include/spdk/ioat.h 00:06:42.195 TEST_HEADER include/spdk/ioat_spec.h 00:06:42.195 CC test/blobfs/mkfs/mkfs.o 00:06:42.195 TEST_HEADER include/spdk/iscsi_spec.h 00:06:42.195 TEST_HEADER include/spdk/json.h 00:06:42.454 TEST_HEADER include/spdk/jsonrpc.h 00:06:42.454 TEST_HEADER include/spdk/keyring.h 00:06:42.454 TEST_HEADER include/spdk/keyring_module.h 00:06:42.454 TEST_HEADER include/spdk/likely.h 00:06:42.454 TEST_HEADER include/spdk/log.h 00:06:42.454 TEST_HEADER include/spdk/lvol.h 00:06:42.454 TEST_HEADER include/spdk/memory.h 00:06:42.454 TEST_HEADER include/spdk/mmio.h 00:06:42.454 TEST_HEADER include/spdk/nbd.h 00:06:42.454 TEST_HEADER include/spdk/notify.h 00:06:42.454 TEST_HEADER include/spdk/nvme.h 00:06:42.454 TEST_HEADER include/spdk/nvme_intel.h 00:06:42.454 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:42.454 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:42.454 TEST_HEADER include/spdk/nvme_spec.h 00:06:42.454 TEST_HEADER include/spdk/nvme_zns.h 00:06:42.454 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:42.454 CC test/dma/test_dma/test_dma.o 00:06:42.454 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:42.454 TEST_HEADER include/spdk/nvmf.h 00:06:42.454 TEST_HEADER include/spdk/nvmf_spec.h 00:06:42.454 TEST_HEADER include/spdk/nvmf_transport.h 00:06:42.454 TEST_HEADER include/spdk/opal.h 00:06:42.454 TEST_HEADER include/spdk/opal_spec.h 00:06:42.454 TEST_HEADER include/spdk/pci_ids.h 00:06:42.454 TEST_HEADER include/spdk/pipe.h 00:06:42.454 TEST_HEADER include/spdk/queue.h 00:06:42.454 TEST_HEADER include/spdk/reduce.h 00:06:42.454 TEST_HEADER include/spdk/rpc.h 00:06:42.454 TEST_HEADER include/spdk/scheduler.h 00:06:42.454 TEST_HEADER include/spdk/scsi.h 00:06:42.454 TEST_HEADER include/spdk/scsi_spec.h 00:06:42.454 TEST_HEADER include/spdk/sock.h 00:06:42.454 TEST_HEADER include/spdk/stdinc.h 00:06:42.454 TEST_HEADER include/spdk/string.h 00:06:42.454 TEST_HEADER include/spdk/thread.h 00:06:42.454 TEST_HEADER include/spdk/trace.h 00:06:42.454 TEST_HEADER include/spdk/trace_parser.h 00:06:42.454 TEST_HEADER include/spdk/tree.h 00:06:42.454 TEST_HEADER include/spdk/ublk.h 00:06:42.454 TEST_HEADER include/spdk/util.h 00:06:42.454 LINK blobcli 00:06:42.454 TEST_HEADER include/spdk/uuid.h 00:06:42.454 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:42.454 TEST_HEADER include/spdk/version.h 00:06:42.454 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:42.454 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:42.454 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:06:42.454 TEST_HEADER include/spdk/vhost.h 00:06:42.454 TEST_HEADER include/spdk/vmd.h 00:06:42.454 TEST_HEADER include/spdk/xor.h 00:06:42.454 TEST_HEADER include/spdk/zipf.h 00:06:42.454 CXX test/cpp_headers/accel.o 00:06:42.454 CC test/app/histogram_perf/histogram_perf.o 00:06:42.712 LINK mkfs 00:06:42.712 CC app/iscsi_tgt/iscsi_tgt.o 00:06:42.712 LINK histogram_perf 00:06:42.712 LINK bdevio 00:06:42.712 CXX test/cpp_headers/accel_module.o 00:06:42.971 LINK test_dma 00:06:42.971 CXX test/cpp_headers/assert.o 00:06:42.971 LINK iscsi_tgt 00:06:42.971 LINK bdevperf 00:06:43.229 CC examples/nvme/arbitration/arbitration.o 00:06:43.230 LINK nvme_fuzz 00:06:43.230 CC app/spdk_tgt/spdk_tgt.o 00:06:43.230 CC examples/nvme/hotplug/hotplug.o 00:06:43.230 CXX test/cpp_headers/barrier.o 00:06:43.230 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:43.230 CXX test/cpp_headers/base64.o 00:06:43.230 LINK nvme_manage 00:06:43.488 CXX test/cpp_headers/bdev.o 00:06:43.488 CXX test/cpp_headers/bdev_module.o 00:06:43.488 LINK spdk_tgt 00:06:43.488 CC test/env/mem_callbacks/mem_callbacks.o 00:06:43.488 LINK arbitration 00:06:43.488 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:43.745 LINK hotplug 00:06:43.745 CC app/spdk_lspci/spdk_lspci.o 00:06:43.745 CXX test/cpp_headers/bdev_zone.o 00:06:43.745 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:43.745 CC test/event/event_perf/event_perf.o 00:06:43.745 LINK spdk_lspci 00:06:44.003 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:44.003 CC test/lvol/esnap/esnap.o 00:06:44.003 CC examples/nvme/abort/abort.o 00:06:44.003 LINK event_perf 00:06:44.003 CXX test/cpp_headers/bit_array.o 00:06:44.262 CC test/nvme/aer/aer.o 00:06:44.262 CC app/spdk_nvme_perf/perf.o 00:06:44.262 LINK mem_callbacks 00:06:44.262 LINK cmb_copy 00:06:44.521 CXX test/cpp_headers/bit_pool.o 00:06:44.521 CC test/event/reactor/reactor.o 00:06:44.521 LINK abort 00:06:44.521 LINK vhost_fuzz 00:06:44.521 LINK aer 00:06:44.521 CXX test/cpp_headers/blob_bdev.o 00:06:44.521 CC test/env/vtophys/vtophys.o 00:06:44.521 LINK reactor 00:06:44.780 CC app/spdk_nvme_identify/identify.o 00:06:44.780 CC app/spdk_nvme_discover/discovery_aer.o 00:06:44.780 LINK vtophys 00:06:44.780 CC test/nvme/reset/reset.o 00:06:44.780 CXX test/cpp_headers/blobfs_bdev.o 00:06:44.780 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:45.038 CC test/event/reactor_perf/reactor_perf.o 00:06:45.038 LINK spdk_nvme_discover 00:06:45.038 LINK pmr_persistence 00:06:45.038 LINK reset 00:06:45.038 CXX test/cpp_headers/blobfs.o 00:06:45.038 LINK reactor_perf 00:06:45.296 LINK iscsi_fuzz 00:06:45.296 LINK spdk_nvme_perf 00:06:45.296 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:45.296 CXX test/cpp_headers/blob.o 00:06:45.296 CC test/nvme/sgl/sgl.o 00:06:45.296 CXX test/cpp_headers/conf.o 00:06:45.554 CC app/spdk_top/spdk_top.o 00:06:45.554 CC examples/util/zipf/zipf.o 00:06:45.554 LINK env_dpdk_post_init 00:06:45.554 CC test/app/jsoncat/jsoncat.o 00:06:45.554 CXX test/cpp_headers/config.o 00:06:45.554 CC test/event/app_repeat/app_repeat.o 00:06:45.554 LINK zipf 00:06:45.554 CXX test/cpp_headers/cpuset.o 00:06:45.554 CC app/vhost/vhost.o 00:06:45.813 LINK sgl 00:06:45.813 LINK spdk_nvme_identify 00:06:45.813 LINK jsoncat 00:06:45.813 LINK app_repeat 00:06:46.071 LINK vhost 00:06:46.071 CXX test/cpp_headers/crc16.o 00:06:46.071 CC test/env/memory/memory_ut.o 00:06:46.071 CC test/nvme/e2edp/nvme_dp.o 00:06:46.071 CC test/env/pci/pci_ut.o 00:06:46.071 CXX test/cpp_headers/crc32.o 
00:06:46.071 CC test/app/stub/stub.o 00:06:46.330 CC examples/thread/thread/thread_ex.o 00:06:46.330 CXX test/cpp_headers/crc64.o 00:06:46.330 CC test/event/scheduler/scheduler.o 00:06:46.589 CXX test/cpp_headers/dif.o 00:06:46.589 LINK stub 00:06:46.589 CC examples/idxd/perf/perf.o 00:06:46.589 LINK nvme_dp 00:06:46.589 LINK thread 00:06:46.589 LINK spdk_top 00:06:46.847 LINK scheduler 00:06:46.847 LINK pci_ut 00:06:46.847 CXX test/cpp_headers/dma.o 00:06:46.847 CC test/nvme/overhead/overhead.o 00:06:47.115 LINK memory_ut 00:06:47.115 CC app/spdk_dd/spdk_dd.o 00:06:47.115 CXX test/cpp_headers/endian.o 00:06:47.115 CXX test/cpp_headers/env_dpdk.o 00:06:47.115 CXX test/cpp_headers/env.o 00:06:47.115 LINK idxd_perf 00:06:47.115 LINK overhead 00:06:47.116 CXX test/cpp_headers/event.o 00:06:47.377 CXX test/cpp_headers/fd_group.o 00:06:47.377 CXX test/cpp_headers/fd.o 00:06:47.377 CXX test/cpp_headers/file.o 00:06:47.377 CXX test/cpp_headers/ftl.o 00:06:47.377 CC test/nvme/err_injection/err_injection.o 00:06:47.377 CXX test/cpp_headers/gpt_spec.o 00:06:47.377 LINK spdk_dd 00:06:47.636 CC test/rpc_client/rpc_client_test.o 00:06:47.636 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:47.636 CC test/thread/poller_perf/poller_perf.o 00:06:47.636 CXX test/cpp_headers/hexlify.o 00:06:47.636 LINK err_injection 00:06:47.636 LINK rpc_client_test 00:06:47.895 CC test/nvme/startup/startup.o 00:06:47.895 LINK interrupt_tgt 00:06:47.895 LINK poller_perf 00:06:47.895 CC app/fio/nvme/fio_plugin.o 00:06:47.895 CC app/fio/bdev/fio_plugin.o 00:06:47.895 CXX test/cpp_headers/histogram_data.o 00:06:47.895 CC test/nvme/reserve/reserve.o 00:06:47.895 LINK startup 00:06:47.895 CXX test/cpp_headers/idxd.o 00:06:48.153 CC test/nvme/simple_copy/simple_copy.o 00:06:48.153 CC test/nvme/connect_stress/connect_stress.o 00:06:48.153 CC test/nvme/boot_partition/boot_partition.o 00:06:48.153 LINK reserve 00:06:48.153 CXX test/cpp_headers/idxd_spec.o 00:06:48.412 CC test/nvme/compliance/nvme_compliance.o 00:06:48.412 LINK spdk_nvme 00:06:48.412 LINK boot_partition 00:06:48.412 LINK spdk_bdev 00:06:48.412 LINK connect_stress 00:06:48.412 LINK simple_copy 00:06:48.412 CXX test/cpp_headers/init.o 00:06:48.671 CC test/nvme/fused_ordering/fused_ordering.o 00:06:48.671 CXX test/cpp_headers/ioat.o 00:06:48.671 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:48.671 CXX test/cpp_headers/ioat_spec.o 00:06:48.928 CC test/nvme/fdp/fdp.o 00:06:48.928 CXX test/cpp_headers/iscsi_spec.o 00:06:48.928 LINK fused_ordering 00:06:48.928 CXX test/cpp_headers/json.o 00:06:48.928 LINK doorbell_aers 00:06:48.928 CXX test/cpp_headers/jsonrpc.o 00:06:49.186 CC test/nvme/cuse/cuse.o 00:06:49.186 LINK nvme_compliance 00:06:49.186 CXX test/cpp_headers/keyring.o 00:06:49.186 CXX test/cpp_headers/keyring_module.o 00:06:49.186 CXX test/cpp_headers/likely.o 00:06:49.186 CXX test/cpp_headers/log.o 00:06:49.444 CXX test/cpp_headers/lvol.o 00:06:49.444 CXX test/cpp_headers/memory.o 00:06:49.444 CXX test/cpp_headers/mmio.o 00:06:49.444 CXX test/cpp_headers/nbd.o 00:06:49.444 LINK fdp 00:06:49.444 CXX test/cpp_headers/notify.o 00:06:49.444 CXX test/cpp_headers/nvme.o 00:06:49.444 CXX test/cpp_headers/nvme_intel.o 00:06:49.444 CXX test/cpp_headers/nvme_ocssd.o 00:06:49.703 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:49.703 CXX test/cpp_headers/nvme_spec.o 00:06:49.703 CXX test/cpp_headers/nvme_zns.o 00:06:49.703 CXX test/cpp_headers/nvmf_cmd.o 00:06:49.703 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:49.703 CXX test/cpp_headers/nvmf.o 00:06:49.703 CXX 
test/cpp_headers/nvmf_spec.o 00:06:49.703 CXX test/cpp_headers/nvmf_transport.o 00:06:49.961 CXX test/cpp_headers/opal.o 00:06:49.961 CXX test/cpp_headers/opal_spec.o 00:06:49.961 CXX test/cpp_headers/pci_ids.o 00:06:49.961 CXX test/cpp_headers/pipe.o 00:06:50.219 CXX test/cpp_headers/queue.o 00:06:50.219 CXX test/cpp_headers/reduce.o 00:06:50.219 CXX test/cpp_headers/rpc.o 00:06:50.219 CXX test/cpp_headers/scheduler.o 00:06:50.219 CXX test/cpp_headers/scsi.o 00:06:50.219 CXX test/cpp_headers/scsi_spec.o 00:06:50.219 CXX test/cpp_headers/sock.o 00:06:50.478 CXX test/cpp_headers/stdinc.o 00:06:50.478 CXX test/cpp_headers/string.o 00:06:50.478 CXX test/cpp_headers/thread.o 00:06:50.478 LINK esnap 00:06:50.478 CXX test/cpp_headers/trace.o 00:06:50.478 CXX test/cpp_headers/trace_parser.o 00:06:50.478 CXX test/cpp_headers/tree.o 00:06:50.737 CXX test/cpp_headers/ublk.o 00:06:50.737 CXX test/cpp_headers/util.o 00:06:50.737 LINK cuse 00:06:50.737 CXX test/cpp_headers/uuid.o 00:06:50.737 CXX test/cpp_headers/version.o 00:06:50.737 CXX test/cpp_headers/vfio_user_pci.o 00:06:50.737 CXX test/cpp_headers/vfio_user_spec.o 00:06:50.737 CXX test/cpp_headers/vhost.o 00:06:50.737 CXX test/cpp_headers/vmd.o 00:06:50.737 CXX test/cpp_headers/xor.o 00:06:50.996 CXX test/cpp_headers/zipf.o 00:06:55.182 00:06:55.182 real 1m17.102s 00:06:55.182 user 8m16.873s 00:06:55.182 sys 1m44.441s 00:06:55.182 08:47:11 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:06:55.182 08:47:11 make -- common/autotest_common.sh@10 -- $ set +x 00:06:55.182 ************************************ 00:06:55.182 END TEST make 00:06:55.182 ************************************ 00:06:55.182 08:47:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:55.182 08:47:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:55.182 08:47:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:55.182 08:47:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.182 08:47:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:55.182 08:47:11 -- pm/common@44 -- $ pid=5186 00:06:55.182 08:47:11 -- pm/common@50 -- $ kill -TERM 5186 00:06:55.182 08:47:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.182 08:47:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:55.182 08:47:11 -- pm/common@44 -- $ pid=5188 00:06:55.182 08:47:11 -- pm/common@50 -- $ kill -TERM 5188 00:06:55.441 08:47:11 -- spdk/autotest.sh@34 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.441 08:47:11 -- nvmf/common.sh@7 -- # uname -s 00:06:55.441 08:47:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.441 08:47:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.441 08:47:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.441 08:47:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.441 08:47:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.441 08:47:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.441 08:47:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.441 08:47:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.441 08:47:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.441 08:47:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.441 08:47:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:06:55.441 08:47:11 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:06:55.441 08:47:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.441 08:47:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.441 08:47:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.441 08:47:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.441 08:47:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.441 08:47:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.441 08:47:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.441 08:47:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.441 08:47:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.441 08:47:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.441 08:47:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.441 08:47:11 -- paths/export.sh@5 -- # export PATH 00:06:55.441 08:47:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.441 08:47:11 -- nvmf/common.sh@47 -- # : 0 00:06:55.441 08:47:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.441 08:47:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.441 08:47:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.441 08:47:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.441 08:47:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.441 08:47:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.441 08:47:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.441 08:47:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.441 08:47:11 -- spdk/autotest.sh@36 -- # '[' 0 -ne 0 ']' 00:06:55.441 08:47:11 -- spdk/autotest.sh@41 -- # uname -s 00:06:55.441 08:47:11 -- spdk/autotest.sh@41 -- # '[' Linux = Linux ']' 00:06:55.441 08:47:11 -- spdk/autotest.sh@42 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:55.441 08:47:11 -- spdk/autotest.sh@43 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:55.441 08:47:11 -- spdk/autotest.sh@48 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:55.441 08:47:11 -- spdk/autotest.sh@49 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:55.441 08:47:11 -- spdk/autotest.sh@53 -- # modprobe nbd 00:06:55.441 08:47:11 -- spdk/autotest.sh@55 -- # type -P udevadm 00:06:55.441 08:47:11 -- spdk/autotest.sh@55 -- # 
udevadm=/usr/sbin/udevadm 00:06:55.441 08:47:11 -- spdk/autotest.sh@56 -- # /usr/sbin/udevadm monitor --property 00:06:55.441 08:47:11 -- spdk/autotest.sh@57 -- # udevadm_pid=54038 00:06:55.441 08:47:11 -- spdk/autotest.sh@62 -- # start_monitor_resources 00:06:55.441 08:47:11 -- pm/common@17 -- # local monitor 00:06:55.441 08:47:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.441 08:47:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.441 08:47:11 -- pm/common@25 -- # sleep 1 00:06:55.441 08:47:11 -- pm/common@21 -- # date +%s 00:06:55.441 08:47:11 -- pm/common@21 -- # date +%s 00:06:55.441 08:47:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715762831 00:06:55.441 08:47:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715762831 00:06:55.441 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715762831_collect-vmstat.pm.log 00:06:55.441 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715762831_collect-cpu-load.pm.log 00:06:56.376 08:47:12 -- spdk/autotest.sh@64 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:56.376 08:47:12 -- spdk/autotest.sh@66 -- # timing_enter autotest 00:06:56.376 08:47:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:56.376 08:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.376 08:47:12 -- spdk/autotest.sh@68 -- # create_test_list 00:06:56.376 08:47:12 -- common/autotest_common.sh@744 -- # xtrace_disable 00:06:56.376 08:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.376 08:47:12 -- spdk/autotest.sh@70 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:56.376 08:47:12 -- spdk/autotest.sh@70 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:56.376 08:47:12 -- spdk/autotest.sh@70 -- # src=/home/vagrant/spdk_repo/spdk 00:06:56.376 08:47:12 -- spdk/autotest.sh@71 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:56.376 08:47:12 -- spdk/autotest.sh@72 -- # cd /home/vagrant/spdk_repo/spdk 00:06:56.376 08:47:12 -- spdk/autotest.sh@74 -- # freebsd_update_contigmem_mod 00:06:56.376 08:47:12 -- common/autotest_common.sh@1451 -- # uname 00:06:56.376 08:47:12 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:06:56.376 08:47:12 -- spdk/autotest.sh@75 -- # freebsd_set_maxsock_buf 00:06:56.376 08:47:12 -- common/autotest_common.sh@1471 -- # uname 00:06:56.376 08:47:12 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:06:56.376 08:47:12 -- spdk/autotest.sh@80 -- # grep CC_TYPE mk/cc.mk 00:06:56.376 08:47:12 -- spdk/autotest.sh@80 -- # CC_TYPE=CC_TYPE=gcc 00:06:56.376 08:47:12 -- spdk/autotest.sh@81 -- # hash lcov 00:06:56.376 08:47:12 -- spdk/autotest.sh@81 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:56.376 08:47:12 -- spdk/autotest.sh@89 -- # export 'LCOV_OPTS= 00:06:56.376 --rc lcov_branch_coverage=1 00:06:56.376 --rc lcov_function_coverage=1 00:06:56.376 --rc genhtml_branch_coverage=1 00:06:56.376 --rc genhtml_function_coverage=1 00:06:56.376 --rc genhtml_legend=1 00:06:56.376 --rc geninfo_all_blocks=1 00:06:56.376 ' 00:06:56.376 08:47:12 -- spdk/autotest.sh@89 -- # LCOV_OPTS=' 00:06:56.376 --rc lcov_branch_coverage=1 00:06:56.376 --rc lcov_function_coverage=1 00:06:56.376 --rc genhtml_branch_coverage=1 00:06:56.376 --rc genhtml_function_coverage=1 
00:06:56.376 --rc genhtml_legend=1 00:06:56.376 --rc geninfo_all_blocks=1 00:06:56.376 ' 00:06:56.376 08:47:12 -- spdk/autotest.sh@90 -- # export 'LCOV=lcov 00:06:56.376 --rc lcov_branch_coverage=1 00:06:56.376 --rc lcov_function_coverage=1 00:06:56.376 --rc genhtml_branch_coverage=1 00:06:56.376 --rc genhtml_function_coverage=1 00:06:56.376 --rc genhtml_legend=1 00:06:56.376 --rc geninfo_all_blocks=1 00:06:56.376 --no-external' 00:06:56.376 08:47:12 -- spdk/autotest.sh@90 -- # LCOV='lcov 00:06:56.376 --rc lcov_branch_coverage=1 00:06:56.376 --rc lcov_function_coverage=1 00:06:56.376 --rc genhtml_branch_coverage=1 00:06:56.376 --rc genhtml_function_coverage=1 00:06:56.376 --rc genhtml_legend=1 00:06:56.376 --rc geninfo_all_blocks=1 00:06:56.376 --no-external' 00:06:56.376 08:47:12 -- spdk/autotest.sh@92 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:56.634 lcov: LCOV version 1.14 00:06:56.634 08:47:12 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:06.608 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:06.608 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:06.608 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:06.608 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:06.608 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:06.608 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:07:13.173 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:13.173 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:25.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:25.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:25.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:25.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:25.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:25.380 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:25.380 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no 
functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:25.381 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:25.381 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:25.381 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:25.382 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:25.639 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:25.639 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:25.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:25.640 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:25.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:29.856 08:47:45 -- spdk/autotest.sh@98 -- 
# timing_enter pre_cleanup 00:07:29.856 08:47:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.856 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.856 08:47:45 -- spdk/autotest.sh@100 -- # rm -f 00:07:29.856 08:47:45 -- spdk/autotest.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:29.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.856 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:29.856 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:29.856 08:47:46 -- spdk/autotest.sh@105 -- # get_zoned_devs 00:07:29.856 08:47:46 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:07:29.856 08:47:46 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:07:29.856 08:47:46 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:07:29.856 08:47:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:29.856 08:47:46 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:07:29.856 08:47:46 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:07:29.856 08:47:46 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:29.856 08:47:46 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:29.856 08:47:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:29.856 08:47:46 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:07:29.856 08:47:46 -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:07:29.856 08:47:46 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:07:29.856 08:47:46 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:29.856 08:47:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:29.856 08:47:46 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:07:29.856 08:47:46 -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:07:29.856 08:47:46 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:07:29.856 08:47:46 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:29.856 08:47:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:29.856 08:47:46 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:07:29.857 08:47:46 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:07:29.857 08:47:46 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:29.857 08:47:46 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:29.857 08:47:46 -- spdk/autotest.sh@107 -- # (( 0 > 0 )) 00:07:29.857 08:47:46 -- spdk/autotest.sh@119 -- # for dev in /dev/nvme*n!(*p*) 00:07:29.857 08:47:46 -- spdk/autotest.sh@121 -- # [[ -z '' ]] 00:07:29.857 08:47:46 -- spdk/autotest.sh@122 -- # block_in_use /dev/nvme0n1 00:07:29.857 08:47:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:07:29.857 08:47:46 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:30.115 No valid GPT data, bailing 00:07:30.115 08:47:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:30.115 08:47:46 -- scripts/common.sh@391 -- # pt= 00:07:30.115 08:47:46 -- scripts/common.sh@392 -- # return 1 00:07:30.115 08:47:46 -- spdk/autotest.sh@123 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:30.115 1+0 records in 00:07:30.115 1+0 records out 00:07:30.115 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.00451951 s, 232 MB/s 00:07:30.115 08:47:46 -- spdk/autotest.sh@119 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.115 08:47:46 -- spdk/autotest.sh@121 -- # [[ -z '' ]] 00:07:30.115 08:47:46 -- spdk/autotest.sh@122 -- # block_in_use /dev/nvme0n2 00:07:30.115 08:47:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n2 pt 00:07:30.115 08:47:46 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:07:30.115 No valid GPT data, bailing 00:07:30.115 08:47:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:07:30.115 08:47:46 -- scripts/common.sh@391 -- # pt= 00:07:30.115 08:47:46 -- scripts/common.sh@392 -- # return 1 00:07:30.115 08:47:46 -- spdk/autotest.sh@123 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:07:30.115 1+0 records in 00:07:30.115 1+0 records out 00:07:30.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042565 s, 246 MB/s 00:07:30.115 08:47:46 -- spdk/autotest.sh@119 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.115 08:47:46 -- spdk/autotest.sh@121 -- # [[ -z '' ]] 00:07:30.115 08:47:46 -- spdk/autotest.sh@122 -- # block_in_use /dev/nvme0n3 00:07:30.115 08:47:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n3 pt 00:07:30.115 08:47:46 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:07:30.115 No valid GPT data, bailing 00:07:30.115 08:47:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:07:30.115 08:47:46 -- scripts/common.sh@391 -- # pt= 00:07:30.115 08:47:46 -- scripts/common.sh@392 -- # return 1 00:07:30.115 08:47:46 -- spdk/autotest.sh@123 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:07:30.115 1+0 records in 00:07:30.115 1+0 records out 00:07:30.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448373 s, 234 MB/s 00:07:30.115 08:47:46 -- spdk/autotest.sh@119 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.115 08:47:46 -- spdk/autotest.sh@121 -- # [[ -z '' ]] 00:07:30.115 08:47:46 -- spdk/autotest.sh@122 -- # block_in_use /dev/nvme1n1 00:07:30.115 08:47:46 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:07:30.115 08:47:46 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:30.115 No valid GPT data, bailing 00:07:30.374 08:47:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:30.374 08:47:46 -- scripts/common.sh@391 -- # pt= 00:07:30.374 08:47:46 -- scripts/common.sh@392 -- # return 1 00:07:30.374 08:47:46 -- spdk/autotest.sh@123 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:30.374 1+0 records in 00:07:30.374 1+0 records out 00:07:30.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426953 s, 246 MB/s 00:07:30.374 08:47:46 -- spdk/autotest.sh@127 -- # sync 00:07:30.374 08:47:46 -- spdk/autotest.sh@129 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:30.374 08:47:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:30.374 08:47:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:32.278 08:47:48 -- spdk/autotest.sh@133 -- # uname -s 00:07:32.278 08:47:48 -- spdk/autotest.sh@133 -- # '[' Linux = Linux ']' 00:07:32.278 08:47:48 -- spdk/autotest.sh@134 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:32.278 08:47:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.278 08:47:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.278 08:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:32.278 
************************************ 00:07:32.278 START TEST setup.sh 00:07:32.278 ************************************ 00:07:32.278 08:47:48 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:07:32.278 * Looking for test storage... 00:07:32.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:32.278 08:47:48 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:07:32.278 08:47:48 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:07:32.278 08:47:48 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:32.278 08:47:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.278 08:47:48 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.278 08:47:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:32.278 ************************************ 00:07:32.278 START TEST acl 00:07:32.278 ************************************ 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:07:32.278 * Looking for test storage... 00:07:32.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:32.278 08:47:48 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:07:32.278 08:47:48 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:32.278 08:47:48 setup.sh.acl -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:32.278 08:47:48 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:07:32.278 08:47:48 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:07:32.278 08:47:48 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:07:32.278 08:47:48 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:07:32.278 08:47:48 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:07:32.278 08:47:48 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:32.278 08:47:48 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:33.214 08:47:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:07:33.214 08:47:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:07:33.214 08:47:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:33.214 08:47:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:07:33.214 08:47:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:07:33.214 08:47:49 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:33.781 Hugepages 00:07:33.781 node hugesize free / total 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:33.781 00:07:33.781 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:33.781 08:47:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:07:34.057 08:47:50 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:07:34.057 08:47:50 setup.sh.acl -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.057 08:47:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.057 08:47:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:34.057 ************************************ 00:07:34.057 START TEST denied 00:07:34.057 ************************************ 00:07:34.057 08:47:50 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:07:34.057 08:47:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:07:34.057 08:47:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:07:34.057 08:47:50 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:07:34.057 08:47:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:07:34.057 08:47:50 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:35.025 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:35.025 08:47:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:35.284 00:07:35.284 real 0m1.442s 00:07:35.284 user 0m0.612s 00:07:35.284 sys 0m0.748s 00:07:35.284 08:47:51 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.284 08:47:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:35.284 ************************************ 00:07:35.284 END TEST denied 00:07:35.284 ************************************ 00:07:35.542 08:47:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:35.542 08:47:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:35.542 08:47:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.542 08:47:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:35.542 ************************************ 00:07:35.542 START TEST allowed 00:07:35.542 ************************************ 00:07:35.542 08:47:51 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:07:35.542 08:47:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:07:35.542 08:47:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:35.542 08:47:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:35.542 08:47:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:35.542 08:47:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:07:36.109 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.109 08:47:52 
setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:36.109 08:47:52 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:37.047 00:07:37.047 real 0m1.504s 00:07:37.047 user 0m0.684s 00:07:37.047 sys 0m0.815s 00:07:37.047 08:47:53 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.047 08:47:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 ************************************ 00:07:37.047 END TEST allowed 00:07:37.047 ************************************ 00:07:37.047 00:07:37.047 real 0m4.735s 00:07:37.047 user 0m2.161s 00:07:37.047 sys 0m2.487s 00:07:37.047 08:47:53 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.047 08:47:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 ************************************ 00:07:37.047 END TEST acl 00:07:37.047 ************************************ 00:07:37.047 08:47:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:37.047 08:47:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:37.047 08:47:53 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.047 08:47:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 ************************************ 00:07:37.047 START TEST hugepages 00:07:37.047 ************************************ 00:07:37.047 08:47:53 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:37.047 * Looking for test storage... 
00:07:37.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5455744 kB' 'MemAvailable: 7382844 kB' 'Buffers: 2436 kB' 'Cached: 2137216 kB' 'SwapCached: 0 kB' 'Active: 873040 kB' 'Inactive: 1369836 kB' 'Active(anon): 113712 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 105228 kB' 'Mapped: 48464 kB' 'Shmem: 10488 kB' 'KReclaimable: 70196 kB' 'Slab: 144532 kB' 'SReclaimable: 70196 kB' 'SUnreclaim: 74336 kB' 'KernelStack: 6492 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.047 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.048 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
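The trace above is the hugepage-size probe in setup/common.sh: with IFS set to ': ', read -r var val _ splits each /proc/meminfo entry into a key and its first value, and every key other than the one being looked up (here Hugepagesize) falls through to continue, producing one trace line per key. A minimal sketch of that scan pattern, assuming a standalone wrapper; the loop body and variable names come from the trace, while the function name and the direct read from /proc/meminfo are illustrative rather than the exact SPDK helper:

# illustrative sketch of the meminfo key scan seen in the trace above
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 2048 for Hugepagesize, as echoed a few entries below
            return 0
        fi
        continue          # non-matching keys are skipped, one trace line per key
    done < /proc/meminfo
    return 1
}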
00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:37.049 08:47:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:37.049 08:47:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:37.049 08:47:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.049 08:47:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:37.049 ************************************ 00:07:37.049 START TEST default_setup 00:07:37.049 ************************************ 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:07:37.049 08:47:53 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:38.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.019 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.019 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7549028 kB' 'MemAvailable: 9475992 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 890044 kB' 'Inactive: 1369844 kB' 'Active(anon): 130716 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121840 kB' 'Mapped: 48648 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144224 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74316 kB' 'KernelStack: 6416 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.019 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
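get_meminfo is also node-aware: the [[ -e /sys/devices/system/node/node/meminfo ]] probe and the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace show it checking for a per-node meminfo file and stripping the "Node N " prefix those files carry, while with node= left empty (as in this run) it falls back to /proc/meminfo via mapfile. A rough sketch of that source-selection logic, assuming extglob is enabled; the structure is paraphrased from the trace rather than copied from setup/common.sh:

shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-} mem_f mem var val _
    mem_f=/proc/meminfo
    # when a node is given and its meminfo exists, read the per-node file instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines are prefixed "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

It is invoked here as get_meminfo AnonHugePages for the machine-wide value; a per-node query such as get_meminfo HugePages_Free 0 would presumably take the sysfs branch.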
00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.020 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
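The snapshot being scanned here is internally consistent with what default_setup requested: get_test_nr_hugepages was called with size 2097152 (kB) on node 0, which at the 2048 kB default page size is the 1024 pages reported as HugePages_Total and HugePages_Free, and Hugetlb reports the same 2097152 kB back. A quick check of that arithmetic, using values copied from the log:

# hugepage accounting from the meminfo snapshot above
size_kb=2097152          # argument passed to get_test_nr_hugepages
hugepagesize_kb=2048     # Hugepagesize from the snapshot
nr_hugepages=$((size_kb / hugepagesize_kb))
echo "$nr_hugepages"                           # 1024, matches HugePages_Total/Free
echo "$((nr_hugepages * hugepagesize_kb)) kB"  # 2097152 kB, matches Hugetlb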
00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548528 kB' 'MemAvailable: 9475492 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 889856 kB' 'Inactive: 1369844 kB' 'Active(anon): 130528 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121668 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144224 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74316 kB' 'KernelStack: 6400 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 
'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.021 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.022 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 
08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.023 
08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548528 kB' 'MemAvailable: 9475492 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 889496 kB' 'Inactive: 1369844 kB' 'Active(anon): 130168 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121280 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144216 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74308 kB' 'KernelStack: 6416 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.023 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 
08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.024 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.025 nr_hugepages=1024 00:07:38.025 resv_hugepages=0 00:07:38.025 surplus_hugepages=0 00:07:38.025 anon_hugepages=0 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:38.025 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.025 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548528 kB' 'MemAvailable: 9475492 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 889736 kB' 'Inactive: 1369844 kB' 'Active(anon): 130408 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121520 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144212 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74304 kB' 'KernelStack: 6416 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.026 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.027 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.027 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.028 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548528 kB' 'MemUsed: 4693444 kB' 'SwapCached: 0 kB' 'Active: 889640 kB' 'Inactive: 1369844 kB' 'Active(anon): 130312 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2139640 kB' 'Mapped: 48508 kB' 'AnonPages: 121416 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144212 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:38.028 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.288 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:38.289 node0=1024 expecting 1024 00:07:38.289 ************************************ 00:07:38.289 END TEST default_setup 00:07:38.289 ************************************ 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:38.289 00:07:38.289 real 0m1.018s 00:07:38.289 user 0m0.478s 00:07:38.289 sys 0m0.477s 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.289 08:47:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 08:47:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:38.289 08:47:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:38.289 08:47:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
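[editor note] The per_node_1G_alloc test that starts below asks for 1048576 kB (1 GiB) of hugepages on node 0; with the default 2048 kB hugepage size that works out to 512 pages, which is why the trace ends up setting NRHUGE=512 and HUGENODE=0 before calling scripts/setup.sh. A small sketch of that arithmetic (variable names are assumptions for illustration, not the hugepages.sh code):

# Size -> page-count conversion behind the NRHUGE=512 HUGENODE=0 seen below.
size_kb=1048576                                                        # 1 GiB requested on node 0, in kB
hugepage_kb=$(awk '$1=="Hugepagesize:" {print $2}' /proc/meminfo)      # 2048 on this system
nr_hugepages=$(( size_kb / hugepage_kb ))                              # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"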
00:07:38.289 08:47:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:38.289 ************************************ 00:07:38.289 START TEST per_node_1G_alloc 00:07:38.289 ************************************ 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:38.289 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:38.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.552 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:38.552 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:38.552 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598680 kB' 'MemAvailable: 10525648 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 890092 kB' 'Inactive: 1369848 kB' 'Active(anon): 130764 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121836 kB' 'Mapped: 48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144220 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74312 kB' 'KernelStack: 6436 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
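[editor note] From here the trace is verify_nr_hugepages gathering three /proc/meminfo fields in turn: AnonHugePages (only consulted because transparent_hugepage is not set to "never" in this run, it reads "always [madvise] never"), then HugePages_Surp and HugePages_Rsvd. A self-contained sketch of what those lookups boil down to, with awk standing in for the traced helper:

# Rough equivalent of the get_meminfo calls traced here; field names are
# straight from /proc/meminfo, awk replaces the setup/common.sh helper.
anon=$(awk  '$1=="AnonHugePages:"   {print $2}' /proc/meminfo)   # anonymous THP, in kB
surp=$(awk  '$1=="HugePages_Surp:"  {print $2}' /proc/meminfo)   # surplus pages
resv=$(awk  '$1=="HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # reserved pages
total=$(awk '$1=="HugePages_Total:" {print $2}' /proc/meminfo)
free=$(awk  '$1=="HugePages_Free:"  {print $2}' /proc/meminfo)
echo "total=$total free=$free anon=$anon surp=$surp resv=$resv"
# In this run anon, surp and resv are all 0 and total=free=512, so node 0
# holds exactly the 512 pages that were requested.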
00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.553 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598680 kB' 'MemAvailable: 10525648 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 889704 kB' 'Inactive: 1369848 kB' 'Active(anon): 130376 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121484 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144208 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74300 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.554 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
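[editor note] Because this test pins the allocation to a single node (HUGENODE=0), the per-node count can also be read straight from sysfs rather than the system-wide /proc/meminfo. The traced common.sh reads the per-node meminfo file for this; the hugepages directory below is shown only as an alternative, standard kernel interface:

# Per-node 2 MiB hugepage count via sysfs (standard Linux NUMA interface,
# illustration only; not what the traced scripts use).
node=0
hp_dir=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
echo "node${node}: $(cat "$hp_dir/nr_hugepages") x 2048 kB hugepages"
# Expected for per_node_1G_alloc: node0: 512 x 2048 kB hugepages (512 x 2 MiB = 1 GiB)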
00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.555 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:38.556 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598680 kB' 'MemAvailable: 10525648 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 889708 kB' 'Inactive: 1369848 kB' 'Active(anon): 130380 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121540 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144204 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74296 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.556 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 
08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.557 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 
08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.823 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 
08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:38.824 nr_hugepages=512 00:07:38.824 resv_hugepages=0 00:07:38.824 surplus_hugepages=0 00:07:38.824 anon_hugepages=0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598428 kB' 'MemAvailable: 10525396 kB' 'Buffers: 2436 kB' 'Cached: 2137204 kB' 'SwapCached: 0 kB' 'Active: 889696 kB' 'Inactive: 1369848 kB' 'Active(anon): 130368 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121472 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144200 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74292 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.824 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
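(For reference, the accounting that hugepages.sh@107-110 performs once resv has been read is a simple consistency check. The sketch below only restates that arithmetic with the values echoed in this run, nr_hugepages=512 with surplus and reserved both 0; it is illustrative, not the script itself:

    # HugePages_Total reported by the kernel must equal the pages requested
    # plus any surplus and reserved pages before the test proceeds.
    nr_hugepages=512
    surp=0            # surplus_hugepages echoed above
    resv=0            # resv_hugepages echoed above
    total=512         # HugePages_Total, which the scan in this pass extracts
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
)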
00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.825 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8598428 kB' 'MemUsed: 3643544 kB' 'SwapCached: 0 kB' 'Active: 889736 kB' 'Inactive: 1369848 kB' 'Active(anon): 130408 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2139640 kB' 'Mapped: 48504 kB' 'AnonPages: 121556 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144200 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.826 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
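(As a quick sanity check on the node0 snapshot printed above: MemUsed, 3643544 kB, is simply MemTotal minus MemFree, 12241972 kB - 8598428 kB = 3643544 kB, so the per-node figures agree with the system-wide snapshot scanned earlier in this test.)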
00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.827 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:38.828 08:47:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:38.828 node0=512 expecting 512 00:07:38.828 ************************************ 00:07:38.828 END TEST per_node_1G_alloc 00:07:38.828 ************************************ 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:38.828 00:07:38.828 real 0m0.548s 00:07:38.828 user 0m0.277s 00:07:38.828 sys 0m0.273s 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.828 08:47:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:38.828 08:47:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:38.828 08:47:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:38.828 08:47:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.828 08:47:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:38.828 ************************************ 00:07:38.828 START TEST even_2G_alloc 00:07:38.828 ************************************ 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:38.828 08:47:54 
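The even_2G_alloc test starting here sizes its allocation the same way the trace just showed: 2,097,152 kB requested, divided by the default hugepage size. A minimal sketch of that conversion, assuming both figures are in kB as the meminfo snapshots below report (Hugepagesize: 2048 kB); the authoritative logic lives in setup/hugepages.sh, which also spreads the count across NUMA nodes:

    # Hedged sketch, not the repository's exact code: how 2097152 kB of
    # hugepage memory becomes nr_hugepages=1024 in the trace above.
    get_test_nr_hugepages() {
        local size=$1                  # requested amount in kB, e.g. 2097152
        local default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    }

With a single memory node, the whole count lands on node 0, which is what the nodes_test[_no_nodes - 1]=1024 assignment just below records before setup output re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes.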
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:38.828 08:47:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.094 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:39.094 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 7555392 kB' 'MemAvailable: 9482364 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 890452 kB' 'Inactive: 1369852 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48572 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144288 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74380 kB' 'KernelStack: 6404 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
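The /proc/meminfo snapshot captured at the start of this scan already reflects the allocation under test: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. the requested 2 GiB is present as 1024 two-megabyte pages. Outside the harness the same figures can be pulled with a plain grep (not part of the SPDK scripts):

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo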
00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.094 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555392 kB' 'MemAvailable: 9482364 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889752 kB' 'Inactive: 1369852 kB' 'Active(anon): 130424 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121560 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144300 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74392 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.095 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.096 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.365 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 
08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.366 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:39.367 08:47:55 
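Each of these queries replays the same pattern: capture the meminfo file, strip any per-node "Node N " prefix, then split every line on ': ' until the requested field turns up. A hedged sketch of that loop follows; the real implementation is setup/common.sh's get_meminfo, and the body below is a reconstruction from the trace rather than the script's source:

    # Hedged reconstruction of the field scan replayed line by line above.
    get_meminfo() {
        local get=$1 var val _
        # With a node argument, the file under
        # /sys/devices/system/node/nodeN/meminfo is read instead and the
        # leading "Node N " prefix is stripped first (the mem=(...) expansion
        # in the trace); omitted here to keep the sketch short.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # e.g. 0 for the HugePages_Surp query above
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

Called as surp=$(get_meminfo HugePages_Surp), which matches the surp=0 recorded just above; the HugePages_Rsvd query that starts here runs the identical loop.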
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555392 kB' 'MemAvailable: 9482364 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889916 kB' 'Inactive: 1369852 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121712 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144296 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74388 kB' 'KernelStack: 6384 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.367 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:39.368 nr_hugepages=1024 00:07:39.368 resv_hugepages=0 00:07:39.368 surplus_hugepages=0 00:07:39.368 anon_hugepages=0 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:39.368 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.369 08:47:55 
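The long runs of "-- # continue" above and below are the xtrace of the get_meminfo helper in setup/common.sh: it splits each meminfo line on ': ', skips every key that is not the one asked for (here HugePages_Rsvd, which comes back 0 and becomes resv=0), and echoes the matching value. A minimal sketch of that pattern, under an illustrative name rather than the actual setup/common.sh source:

    # Illustrative re-implementation of the lookup being traced; not the SPDK helper itself.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}          # key to fetch, optional NUMA node
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <N> "; strip it so the
        # "Key: value" split below is identical for both sources.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the flood of 'continue' entries above
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    # e.g. resv=$(get_meminfo_sketch HugePages_Rsvd)   # -> 0 in this run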
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555392 kB' 'MemAvailable: 9482364 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 890400 kB' 'Inactive: 1369852 kB' 'Active(anon): 131072 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122232 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144296 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74388 kB' 'KernelStack: 6448 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 
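The /proc/meminfo snapshot printed just above is what even_2G_alloc expects to see after its allocation: 1024 huge pages, all free, none reserved or surplus. A quick consistency check on those numbers, using the values from this run:

    $ echo $(( 1024 * 2048 )) kB     # HugePages_Total * Hugepagesize
    2097152 kB                       # matches the Hugetlb field, i.e. exactly 2 GiB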
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 
08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.369 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.370 
08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.370 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555392 kB' 'MemUsed: 4686580 kB' 'SwapCached: 0 kB' 'Active: 889724 kB' 'Inactive: 1369852 kB' 'Active(anon): 130396 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 
0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2139644 kB' 'Mapped: 48508 kB' 'AnonPages: 121532 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144284 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
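The snapshot just printed differs from the earlier one because it comes from the node-local file: with node=0, setup/common.sh@23-24 switch mem_f to /sys/devices/system/node/node0/meminfo, whose per-node fields (MemUsed, FilePages) replace the system-wide ones, and the same key scan then runs looking for HugePages_Surp. A hypothetical one-liner showing that node-local source, with the "Node 0 " prefix stripped the same way the mapfile step above does:

    $ sed -E 's/^Node 0 //' /sys/devices/system/node/node0/meminfo | grep '^HugePages_Surp'
    HugePages_Surp:     0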
00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.371 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:39.372 node0=1024 expecting 1024 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:39.372 ************************************ 00:07:39.372 END TEST even_2G_alloc 00:07:39.372 ************************************ 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:39.372 00:07:39.372 real 0m0.535s 00:07:39.372 user 0m0.271s 00:07:39.372 sys 0m0.270s 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.372 08:47:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:39.372 08:47:55 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:39.372 08:47:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.372 08:47:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.372 08:47:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:39.372 ************************************ 00:07:39.372 START TEST odd_alloc 00:07:39.372 ************************************ 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:07:39.372 08:47:55 
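Just above, even_2G_alloc finishes its per-node accounting (hugepages.sh@115-117, @128): the reserved and surplus page counts, both 0 in this run, are folded into the expected count for node 0 and compared with what the kernel reports, giving "node0=1024 expecting 1024". A minimal sketch of that bookkeeping, with the values observed here hard-coded and illustrative variable names rather than the script's own:

    # Values from this run; indexed arrays keyed by NUMA node number.
    nodes_sys=( [0]=1024 )    # hugepages the kernel reports on node 0
    nodes_test=( [0]=1024 )   # hugepages the test distributed to node 0
    resv=0                    # HugePages_Rsvd read earlier
    surp=0                    # HugePages_Surp for node 0, read just above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))
        echo "node${node}=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
    done
    # -> node0=1024 expecting 1024: the even 2G allocation landed entirely on node 0.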
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:39.372 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.638 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:39.638 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
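The odd_alloc setup traced above deliberately asks for an amount that does not divide evenly into huge pages: get_test_nr_hugepages is called with 2098176 kB, and the setup run a few entries later uses HUGEMEM=2049 (MB), the same amount. At the default 2048 kB huge page size the trace ends up with nr_hugepages=1025 (hugepages.sh@57), i.e. the request appears to be rounded up to whole pages:

    $ echo $(( 2049 * 1024 ))                    # HUGEMEM in MB -> size in kB
    2098176
    $ echo $(( (2098176 + 2048 - 1) / 2048 ))    # whole 2048 kB pages needed, rounded up
    1025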
00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.638 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.639 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558588 kB' 'MemAvailable: 9485560 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889844 kB' 'Inactive: 1369852 kB' 'Active(anon): 130516 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121704 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144260 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74352 kB' 'KernelStack: 6420 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.639 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.639 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.639 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.963 
08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.963 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.964 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.965 
08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558340 kB' 'MemAvailable: 9485312 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889720 kB' 'Inactive: 1369852 kB' 'Active(anon): 130392 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121504 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144264 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74356 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.965 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 
08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.966 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
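After the anon and surp reads, the same scan repeats below for HugePages_Rsvd and HugePages_Total, and the counts are cross-checked. A rough sketch of that bookkeeping from verify_nr_hugepages follows; variable names mirror the hugepages.sh trace, verify_nr_hugepages_sketch is a made-up name, and the per-node accounting the real script also performs via /sys is omitted.

# Sketch of the cross-check traced in setup/hugepages.sh: the kernel-reported
# total must equal the requested count once surplus and reserved pages are
# folded in. Uses the get_meminfo_sketch helper from the earlier sketch.
verify_nr_hugepages_sketch() {
  local nr_hugepages=$1                          # 1025 for the odd_alloc test
  local anon surp resv total
  anon=$(get_meminfo_sketch AnonHugePages)       # read because THP is not [never] on this host
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  total=$(get_meminfo_sketch HugePages_Total)
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  (( total == nr_hugepages + surp + resv )) || return 1
  (( total == nr_hugepages )) || return 1
}

On this run the trace reports anon, surp and resv all as 0 and HugePages_Total as 1025, so both comparisons pass, matching the nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines echoed further down.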
00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558340 kB' 'MemAvailable: 9485312 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889720 kB' 'Inactive: 1369852 kB' 'Active(anon): 130392 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121496 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144260 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74352 kB' 'KernelStack: 6384 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.967 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.968 
08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:39.968 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:39.968 nr_hugepages=1025 00:07:39.969 resv_hugepages=0 00:07:39.969 surplus_hugepages=0 00:07:39.969 anon_hugepages=0 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558340 kB' 'MemAvailable: 9485312 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889784 kB' 'Inactive: 1369852 kB' 'Active(anon): 130456 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121564 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144216 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74308 kB' 'KernelStack: 6416 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 
08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.969 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:39.970 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7558340 kB' 'MemUsed: 4683632 kB' 'SwapCached: 0 kB' 'Active: 889416 kB' 'Inactive: 1369852 kB' 'Active(anon): 130088 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2139644 kB' 'Mapped: 48508 kB' 'AnonPages: 121480 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144212 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.971 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:39.972 node0=1025 expecting 1025 00:07:39.972 ************************************ 00:07:39.972 END TEST odd_alloc 00:07:39.972 ************************************ 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:07:39.972 00:07:39.972 real 0m0.553s 00:07:39.972 user 0m0.270s 00:07:39.972 sys 0m0.288s 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.972 08:47:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:39.972 08:47:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:39.972 08:47:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.972 08:47:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.972 08:47:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:39.972 ************************************ 00:07:39.972 START TEST custom_alloc 00:07:39.972 ************************************ 00:07:39.972 08:47:56 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:39.972 
08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:39.972 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:40.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.239 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:40.239 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.504 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605172 kB' 'MemAvailable: 10532144 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 890464 kB' 'Inactive: 1369852 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122004 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144188 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74280 kB' 'KernelStack: 6420 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.505 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605172 kB' 'MemAvailable: 10532144 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889756 kB' 'Inactive: 1369852 kB' 'Active(anon): 130428 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121584 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144188 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74280 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.506 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.506 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
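The block above is setup/common.sh's get_meminfo tracing its way through /proc/meminfo one "key: value" pair at a time until it reaches the requested field (HugePages_Surp here) and echoes its value. A minimal sketch of that lookup pattern, with the helper name lookup_meminfo and its single-argument interface assumed for illustration rather than taken from the script:

  # Print the value of one /proc/meminfo field, e.g.: lookup_meminfo HugePages_Surp
  lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      # Skip every field until the requested key is reached, then print its value.
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done < /proc/meminfo
    return 1
  }

The script's own version first snapshots the whole file with mapfile and then walks that array with the same IFS=': ' read loop, which is why every non-matching key shows up in the trace as a [[ ... ]] test followed by continue.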
00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.507 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605172 kB' 'MemAvailable: 10532144 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889772 kB' 'Inactive: 1369852 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121588 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144176 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74268 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
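Each of these meminfo snapshots is internally consistent on the hugepage side: Hugepagesize is 2048 kB and HugePages_Total is 512, so the Hugetlb figure of 1048576 kB is exactly 512 × 2048 kB, and HugePages_Free staying at 512 means the freshly reserved 1 GiB pool is still completely unused while these checks run.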
00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.508 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.508 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
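Before each of these scans, get_meminfo also decides where to read from: the trace shows it defaulting mem_f to /proc/meminfo, testing for /sys/devices/system/node/node/meminfo (the node variable is empty in these calls, hence the odd-looking path), and stripping the "Node <id> " prefix that per-node meminfo files carry. A sketch of that source selection, with the node-id argument handling assumed for illustration:

  shopt -s extglob                      # needed for the +([0-9]) pattern below
  node=${1:-}                           # optional NUMA node id (assumed interface)
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Per-node meminfo lines are prefixed with "Node <id> "; drop that prefix so the
  # same "key: value" parsing works for both sources.
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"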
00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.509 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:40.510 nr_hugepages=512 00:07:40.510 resv_hugepages=0 00:07:40.510 surplus_hugepages=0 00:07:40.510 anon_hugepages=0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605172 kB' 'MemAvailable: 10532144 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889772 kB' 'Inactive: 1369852 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121592 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144176 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74268 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.510 08:47:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [xtrace condensed: every remaining /proc/meminfo field, SwapCached through CmaFree and Unaccepted, is compared against HugePages_Total in the same way and skipped with '# continue'] 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8605172 kB' 'MemUsed: 3636800 kB' 'SwapCached: 0 kB' 'Active: 889496 kB' 'Inactive: 1369852 kB' 'Active(anon): 130168 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2139644 kB' 'Mapped: 48504 kB' 'AnonPages: 121320 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144176 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.512 08:47:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue [xtrace condensed: the node0 meminfo fields from MemFree through HugePages_Total are each compared against HugePages_Surp and skipped with '# continue'; the scan finishes just below]
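The two scans above (HugePages_Total against /proc/meminfo, then HugePages_Surp against node0's meminfo, which finishes just below) are the same get_meminfo pattern: capture the file, strip any "Node N " prefix, then walk it with read under IFS=': ' until the requested key turns up. A minimal standalone sketch of that lookup, assuming the usual meminfo layout; the helper name and the sysfs fallback handling are illustrative, not the repository's exact code:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup (illustrative name: get_meminfo_sketch).
shopt -s extglob                              # for the +([0-9]) prefix pattern below

get_meminfo_sketch() {
    local key=$1 node=${2:-}                  # e.g. HugePages_Surp 0
    local file=/proc/meminfo
    # Per-node counters live under sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }           # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"                  # value only; the trailing kB unit is dropped, as in the trace
            return 0
        fi
    done < "$file"
    return 1
}

# Example: the per-node surplus count the test compares against 0 above.
get_meminfo_sketch HugePages_Surp 0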
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:40.513 node0=512 expecting 512 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:40.513 00:07:40.513 real 0m0.532s 00:07:40.513 user 0m0.247s 00:07:40.513 sys 0m0.294s 00:07:40.513 ************************************ 00:07:40.513 END TEST custom_alloc 00:07:40.513 ************************************ 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.513 08:47:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:40.513 08:47:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:40.513 08:47:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:40.513 08:47:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.513 08:47:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:40.513 ************************************ 00:07:40.513 START TEST no_shrink_alloc 00:07:40.513 ************************************ 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:40.513 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:40.514 08:47:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:40.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.037 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.037 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.037 08:47:57 
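Before setup.sh rebinds the devices listed above, get_test_nr_hugepages turns the requested size into a page count and hands it to the user-supplied node list: 2097152 (interpreted here as KiB, i.e. 2 GiB, which matches the Hugetlb: 2097152 kB reported in the snapshot below) divided by the 2048 KiB default hugepage size gives the nr_hugepages=1024 seen in the trace, all of it assigned to node 0. A hedged sketch of that arithmetic with illustrative variable names, not the script's own:

#!/usr/bin/env bash
# Convert a requested allocation size (in KiB) into a hugepage count and
# assign it to each requested node, mirroring the values in the trace above.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
: "${hugepagesize_kb:=2048}"                  # fall back if the field is missing
nr_hugepages=$(( size_kb / hugepagesize_kb )) # 2097152 / 2048 = 1024

user_nodes=(0)                                # the test asks for node 0 only
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages            # every listed node gets the full count
done

echo "nr_hugepages=$nr_hugepages"             # -> nr_hugepages=1024 on a 2048 KiB hugepage system
echo "node0=${nodes_test[0]}"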
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555528 kB' 'MemAvailable: 9482500 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 890512 kB' 'Inactive: 1369852 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 122072 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144316 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74408 kB' 'KernelStack: 6404 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.037 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.037 08:47:57 
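The snapshot just printed reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is the accounting identity the test leans on: total pages times page size equals the kernel's Hugetlb figure. A small self-contained check of that identity against the live /proc/meminfo (the numbers will reflect whatever the current machine has booted with):

#!/usr/bin/env bash
# Cross-check HugePages_Total * Hugepagesize == Hugetlb from /proc/meminfo.
read -r total size hugetlb < <(awk '
    /^HugePages_Total:/ {t=$2}
    /^Hugepagesize:/    {s=$2}
    /^Hugetlb:/         {h=$2}
    END {print t, s, h}' /proc/meminfo)

if (( total * size == hugetlb )); then
    echo "consistent: ${total} pages x ${size} kB = ${hugetlb} kB"
else
    echo "mismatch: ${total} x ${size} != ${hugetlb:-missing}" >&2
fi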
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: every meminfo field from SwapCached through Percpu is compared against AnonHugePages and skipped with '# continue'; the scan resumes below at HardwareCorrupted and ends on AnonHugePages]
setup/common.sh@31 -- # IFS=': ' 00:07:41.038 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555680 kB' 'MemAvailable: 9482652 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889788 kB' 'Inactive: 1369852 kB' 'Active(anon): 130460 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121600 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144320 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74412 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 
kB' 'DirectMap1G: 8388608 kB' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.039 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:41.040 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555680 kB' 'MemAvailable: 9482652 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889792 kB' 'Inactive: 1369852 kB' 'Active(anon): 130464 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121596 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144320 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74412 kB' 'KernelStack: 6384 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.041 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.042 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:41.043 nr_hugepages=1024 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:41.043 resv_hugepages=0 00:07:41.043 surplus_hugepages=0 00:07:41.043 anon_hugepages=0 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555680 kB' 'MemAvailable: 9482652 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889860 kB' 'Inactive: 1369852 kB' 'Active(anon): 130532 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121696 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144324 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74416 kB' 'KernelStack: 6432 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.043 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 
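To make the repeated xtrace above easier to follow: the loop being traced is just a field scan over a meminfo file. The sketch below is illustrative, not the actual setup/common.sh helper; the function name get_meminfo_sketch is made up, and only the variable names (get, node, mem_f, var, val) mirror what the trace shows.

    # Minimal sketch of the field scan the xtrace repeats for every meminfo line.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # When a node index is given and the per-node file exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            # Per-node files prefix each line with "Node <N> "; drop that prefix.
            if [[ $line == "Node "* ]]; then
                line=${line#Node }
                line=${line#* }
            fi
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # e.g. 1024 for HugePages_Total
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    # Usage matching the calls seen in the trace:
    #   get_meminfo_sketch HugePages_Total     # -> 1024
    #   get_meminfo_sketch HugePages_Surp 0    # -> 0 (node0)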
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:41.044 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7555680 kB' 'MemUsed: 4686292 kB' 'SwapCached: 0 kB' 'Active: 889832 kB' 'Inactive: 1369852 kB' 'Active(anon): 130504 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 2139644 kB' 'Mapped: 48504 kB' 'AnonPages: 121620 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144312 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 
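The per-node branch being exercised here is driven by a node walk earlier in the trace (get_nodes). The following sketch shows one plausible way that walk is done; the per-node nr_hugepages sysfs path used for the count is an assumption, since the trace only shows the resulting value (1024) being assigned to nodes_sys.

    # Enumerate NUMA nodes and record a 2 MiB hugepage count per node (sketch only;
    # the real loop lives in setup/hugepages.sh).
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}                                    # ".../node0" -> "0"
        nodes_sys[$idx]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")   # assumed source of the 1024
    done
    echo "no_nodes=${#nodes_sys[@]} nodes: ${!nodes_sys[*]}"  # e.g. no_nodes=1 nodes: 0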
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.045 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.046 
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:41.046 node0=1024 expecting 1024 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.046 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:41.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.620 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.620 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:41.620 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.620 08:47:57 
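The "Requested 512 hugepages but 1024 already allocated on node0" line comes from scripts/setup.sh being invoked with NRHUGE=512 and CLEAR_HUGE=no, which the trace sets just before the call. A hand-run equivalent, together with the follow-up check this test then performs, might look like the sketch below (run as root; the repository path is the one used in this job).

    # Re-run the SPDK setup script with the same environment the trace sets.
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # setup.sh declines to shrink an existing pool, so the kernel still reports
    # the original allocation:
    grep HugePages_Total /proc/meminfo        # HugePages_Total: 1024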
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7554496 kB' 'MemAvailable: 9481468 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 890380 kB' 'Inactive: 1369852 kB' 'Active(anon): 131052 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 396 kB' 'Writeback: 0 kB' 'AnonPages: 121880 kB' 'Mapped: 48956 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144304 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74396 kB' 'KernelStack: 6404 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.620 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.621 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7554248 kB' 'MemAvailable: 9481220 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889804 kB' 'Inactive: 1369852 kB' 'Active(anon): 130476 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 396 kB' 'Writeback: 0 kB' 'AnonPages: 121604 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144312 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74404 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.622 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.623 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7554248 kB' 'MemAvailable: 9481220 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889740 kB' 'Inactive: 1369852 kB' 'Active(anon): 130412 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 396 kB' 'Writeback: 0 kB' 'AnonPages: 121516 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144312 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74404 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.624 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
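The trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: it splits each record on ': ', skips every key that does not match the requested field (AnonHugePages, HugePages_Surp, then HugePages_Rsvd), echoes the matching value, and hugepages.sh stores the results as anon=0, surp=0 and resv=0 before checking them against nr_hugepages. A minimal sketch of that loop, reconstructed from the trace rather than taken from the SPDK sources, looks like this:

```bash
# Sketch of the get_meminfo loop visible in the xtrace (a reconstruction,
# not the verbatim setup/common.sh implementation). $1 is the meminfo key
# to fetch, e.g. HugePages_Surp; an optional $2 selects a per-node file.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip MemTotal, MemFree, ... until the key matches
        echo "$val"                       # the trailing "kB" unit, if any, lands in $_
        return 0
    done < "$mem_f"
    return 1
}

# Usage as in the no_shrink_alloc test: the echoed values feed the later
# check that HugePages_Total equals nr_hugepages + surp + resv.
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
```

Because the helper scans every key until it finds the requested one, each call produces the long run of "continue" trace entries seen here, one per /proc/meminfo field.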
00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.625 nr_hugepages=1024 00:07:41.625 resv_hugepages=0 00:07:41.625 surplus_hugepages=0 00:07:41.625 anon_hugepages=0 00:07:41.625 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:41.625 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7554500 kB' 'MemAvailable: 9481472 kB' 'Buffers: 2436 kB' 'Cached: 2137208 kB' 'SwapCached: 0 kB' 'Active: 889784 kB' 'Inactive: 1369852 kB' 'Active(anon): 130456 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 396 kB' 'Writeback: 0 kB' 'AnonPages: 121608 kB' 'Mapped: 48508 kB' 'Shmem: 10464 kB' 'KReclaimable: 69908 kB' 'Slab: 144312 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74404 kB' 'KernelStack: 6416 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.626 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:41.627 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:41.627 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7556036 kB' 'MemUsed: 4685936 kB' 'SwapCached: 0 kB' 'Active: 885016 kB' 'Inactive: 1369852 kB' 'Active(anon): 125688 kB' 'Inactive(anon): 0 kB' 'Active(file): 759328 kB' 'Inactive(file): 1369852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 396 kB' 'Writeback: 0 kB' 'FilePages: 2139644 kB' 'Mapped: 47768 kB' 'AnonPages: 116824 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69908 kB' 'Slab: 144288 kB' 'SReclaimable: 69908 kB' 'SUnreclaim: 74380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.628 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 
08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:41.629 08:47:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:41.629 node0=1024 expecting 1024 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:41.629 00:07:41.629 real 0m1.085s 00:07:41.629 user 0m0.513s 00:07:41.629 sys 0m0.574s 00:07:41.629 ************************************ 00:07:41.629 END TEST no_shrink_alloc 00:07:41.629 ************************************ 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.629 08:47:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:41.629 08:47:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:41.629 ************************************ 00:07:41.629 END TEST hugepages 00:07:41.629 ************************************ 00:07:41.629 00:07:41.629 real 0m4.714s 00:07:41.629 user 0m2.214s 00:07:41.629 sys 0m2.431s 00:07:41.629 08:47:57 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.629 08:47:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:41.889 08:47:57 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:41.889 08:47:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:41.889 08:47:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.889 08:47:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:41.889 ************************************ 00:07:41.889 START TEST driver 00:07:41.889 ************************************ 00:07:41.889 08:47:57 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:41.889 * Looking for test storage... 
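The long run of field comparisons above is setup/common.sh's get_meminfo scanning every line of the per-node meminfo file until it reaches the field it was asked for, and the clear_hp calls that close the hugepages test hand the reserved pages back by writing 0 into each per-node pool. A minimal stand-alone sketch of the same idea, not the SPDK helper itself (node0 and the 2048kB page size are assumptions for the example, and the final write needs root):

    # read a per-node hugepage counter, as the trace above does field by field
    awk '/^Node 0 HugePages_Total:/ {print $4}' /sys/devices/system/node/node0/meminfo
    awk '/^Node 0 HugePages_Surp:/ {print $4}' /sys/devices/system/node/node0/meminfo
    # clear_hp-style cleanup: return the whole per-node pool to the kernel
    echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages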
00:07:41.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:41.889 08:47:57 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:41.889 08:47:57 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:41.889 08:47:57 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:42.457 08:47:58 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:42.457 08:47:58 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.457 08:47:58 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.457 08:47:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:42.457 ************************************ 00:07:42.457 START TEST guess_driver 00:07:42.457 ************************************ 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:07:42.457 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:42.457 Looking for driver=uio_pci_generic 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
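The guess_driver trace above first looks for a usable VFIO setup and only falls back to uio_pci_generic when none is found. A condensed sketch of that selection logic (an illustration of the idea, not the exact setup/driver.sh code; assumes a Linux host with modprobe available):

    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        driver=vfio-pci                      # IOMMU groups (or unsafe no-IOMMU mode) present
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        driver=uio_pci_generic               # module resolvable, as in the insmod lines above
    else
        driver='No valid driver found'
    fi
    echo "Looking for driver=$driver"

In this run no IOMMU groups were found, so the fallback path is taken, matching the 'Looking for driver=uio_pci_generic' line in the trace.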
00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:42.457 08:47:58 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:43.024 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:43.024 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:07:43.024 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:43.024 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:43.024 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:43.024 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:43.282 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:43.282 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:43.283 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:43.283 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:43.283 08:47:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:43.283 08:47:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:43.283 08:47:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.884 00:07:43.884 real 0m1.437s 00:07:43.884 user 0m0.539s 00:07:43.884 sys 0m0.884s 00:07:43.884 08:47:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.884 ************************************ 00:07:43.884 END TEST guess_driver 00:07:43.884 08:47:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:43.884 ************************************ 00:07:43.884 ************************************ 00:07:43.884 END TEST driver 00:07:43.884 ************************************ 00:07:43.884 00:07:43.884 real 0m2.113s 00:07:43.884 user 0m0.771s 00:07:43.884 sys 0m1.382s 00:07:43.884 08:47:59 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.884 08:47:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:43.884 08:48:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:43.884 08:48:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:43.884 08:48:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.884 08:48:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:43.884 ************************************ 00:07:43.884 START TEST devices 00:07:43.884 ************************************ 00:07:43.884 08:48:00 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:43.884 * Looking for test storage... 
00:07:43.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:43.884 08:48:00 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:43.884 08:48:00 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:43.884 08:48:00 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:43.884 08:48:00 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:44.825 08:48:00 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:44.825 08:48:00 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:44.825 No valid GPT data, bailing 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:44.825 08:48:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.825 08:48:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.825 08:48:00 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:07:44.825 No valid GPT data, bailing 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:44.825 08:48:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:07:44.825 08:48:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:07:44.825 08:48:00 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:44.825 08:48:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:44.826 08:48:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:44.826 08:48:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:07:44.826 08:48:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:44.826 08:48:01 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:44.826 08:48:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:44.826 08:48:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:07:44.826 08:48:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:07:44.826 08:48:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:07:45.090 No valid GPT data, bailing 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:07:45.090 08:48:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:07:45.090 08:48:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:07:45.090 08:48:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:07:45.090 No valid GPT data, bailing 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:45.090 08:48:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:07:45.090 08:48:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:07:45.090 08:48:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:07:45.090 08:48:01 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:45.090 08:48:01 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:45.090 08:48:01 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.090 08:48:01 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.090 08:48:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:45.090 ************************************ 00:07:45.090 START TEST nvme_mount 00:07:45.090 ************************************ 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:45.090 08:48:01 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:46.056 Creating new GPT entries in memory. 00:07:46.056 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:46.056 other utilities. 00:07:46.056 08:48:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:46.056 08:48:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:46.056 08:48:02 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:46.056 08:48:02 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:46.056 08:48:02 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:47.008 Creating new GPT entries in memory. 00:07:47.008 The operation has completed successfully. 
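In the devices test above, each namespace was first probed for an existing partition table ('No valid GPT data, bailing' comes from the spdk-gpt.py probe, and the blkid PTTYPE query comes back empty), and the nvme_mount test that follows zaps the disk and lays down a single small partition with sgdisk; the mkfs.ext4 and mount further down in the log complete the cycle. Compressed into a few commands (illustrative only; the device name and mount point are assumptions, and every step is destructive):

    dev=/dev/nvme0n1
    mnt=/mnt/nvme_test
    blkid -s PTTYPE -o value "$dev"       # empty output means no partition table yet
    sgdisk "$dev" --zap-all               # destroy any existing GPT/MBR structures
    sgdisk "$dev" --new=1:2048:264191     # one ~128 MiB partition starting at sector 2048
    mkfs.ext4 -qF "${dev}p1"              # quiet, forced ext4 on the new partition
    mkdir -p "$mnt" && mount "${dev}p1" "$mnt"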
00:07:47.008 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:47.008 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:47.008 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58259 00:07:47.008 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.008 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:07:47.008 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:47.266 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.524 08:48:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:47.524 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.524 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:47.524 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:47.782 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:47.782 08:48:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:48.040 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.040 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.040 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:48.040 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:48.041 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.299 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:48.557 08:48:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:48.815 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.815 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:48.815 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:48.815 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.815 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.815 08:48:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.815 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:48.815 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:49.073 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:49.073 00:07:49.073 real 0m4.008s 00:07:49.073 user 0m0.696s 00:07:49.073 sys 0m1.052s 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.073 08:48:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 ************************************ 00:07:49.073 END TEST nvme_mount 00:07:49.073 
************************************ 00:07:49.073 08:48:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:49.073 08:48:05 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.073 08:48:05 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.073 08:48:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 ************************************ 00:07:49.073 START TEST dm_mount 00:07:49.073 ************************************ 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:49.073 08:48:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:50.448 Creating new GPT entries in memory. 00:07:50.448 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:50.448 other utilities. 00:07:50.448 08:48:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:50.448 08:48:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:50.448 08:48:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:50.448 08:48:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:50.448 08:48:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:51.385 Creating new GPT entries in memory. 00:07:51.385 The operation has completed successfully. 
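For reference, the partition_drive trace above and below reduces to a short sequence of plain sgdisk commands; the following is only an illustrative sketch assembled from the values in the trace (device name and sector ranges are the ones logged here), not a general recipe — the real helper also waits for the partition uevents via scripts/sync_dev_uevents.sh.
  sgdisk /dev/nvme0n1 --zap-all                                  # wipe existing GPT/MBR signatures first
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191     # create nvme0n1p1; flock serializes access to the disk
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335   # create nvme0n1p2 on the next pass of the loop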
00:07:51.385 08:48:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:51.385 08:48:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:51.385 08:48:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:51.385 08:48:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:51.385 08:48:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:52.319 The operation has completed successfully. 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58687 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:52.319 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:52.577 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:52.836 08:48:08 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:52.836 08:48:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:52.836 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:52.836 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:52.836 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:52.836 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.836 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:52.836 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:53.096 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:53.096 08:48:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:07:53.356 00:07:53.356 real 0m4.109s 00:07:53.356 user 0m0.417s 00:07:53.356 sys 0m0.641s 00:07:53.356 08:48:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.356 08:48:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:53.356 ************************************ 00:07:53.356 END TEST dm_mount 00:07:53.356 ************************************ 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:53.356 08:48:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:53.616 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:53.616 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:53.616 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:53.616 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:53.616 08:48:09 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:53.616 00:07:53.616 real 0m9.629s 00:07:53.616 user 0m1.754s 00:07:53.616 sys 0m2.281s 00:07:53.616 08:48:09 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.616 08:48:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:53.616 ************************************ 00:07:53.616 END TEST devices 00:07:53.616 ************************************ 00:07:53.616 ************************************ 00:07:53.616 END TEST setup.sh 00:07:53.616 ************************************ 00:07:53.616 00:07:53.616 real 0m21.464s 00:07:53.616 user 0m6.988s 00:07:53.616 sys 0m8.752s 00:07:53.616 08:48:09 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.616 08:48:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:53.616 08:48:09 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:54.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.181 Hugepages 00:07:54.181 node hugesize free / total 00:07:54.181 node0 1048576kB 0 / 0 00:07:54.181 node0 2048kB 2048 / 2048 00:07:54.181 00:07:54.181 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:54.181 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:54.440 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:54.440 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:07:54.440 08:48:10 -- spdk/autotest.sh@139 -- # uname -s 00:07:54.440 08:48:10 -- spdk/autotest.sh@139 -- # [[ Linux == Linux ]] 00:07:54.440 08:48:10 -- spdk/autotest.sh@141 -- # nvme_namespace_revert 00:07:54.440 08:48:10 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.267 08:48:11 -- common/autotest_common.sh@1528 -- # sleep 1 00:07:56.201 08:48:12 -- common/autotest_common.sh@1529 -- # bdfs=() 00:07:56.201 08:48:12 -- common/autotest_common.sh@1529 -- # local bdfs 00:07:56.201 08:48:12 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.201 08:48:12 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:07:56.201 08:48:12 -- common/autotest_common.sh@1509 -- # bdfs=() 00:07:56.201 08:48:12 -- common/autotest_common.sh@1509 -- # local bdfs 00:07:56.201 08:48:12 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.201 08:48:12 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.201 08:48:12 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:07:56.461 08:48:12 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:07:56.461 08:48:12 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:56.461 08:48:12 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:56.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.721 Waiting for block devices as requested 00:07:56.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:56.980 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:56.980 08:48:13 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:07:56.980 08:48:13 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:56.980 08:48:13 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:56.980 08:48:13 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:07:56.980 08:48:13 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:56.980 08:48:13 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:56.980 08:48:13 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:56.980 08:48:13 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:07:56.980 08:48:13 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:07:56.980 08:48:13 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:07:56.980 08:48:13 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:07:56.980 08:48:13 -- common/autotest_common.sh@1541 -- # grep oacs 00:07:56.980 08:48:13 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:07:56.980 08:48:13 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:07:56.980 08:48:13 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:07:56.980 08:48:13 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:07:56.980 08:48:13 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
00:07:56.980 08:48:13 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:07:56.980 08:48:13 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:07:56.980 08:48:13 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:07:56.980 08:48:13 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:07:56.980 08:48:13 -- common/autotest_common.sh@1553 -- # continue 00:07:56.981 08:48:13 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:07:56.981 08:48:13 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:56.981 08:48:13 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:07:56.981 08:48:13 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:56.981 08:48:13 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:07:56.981 08:48:13 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1541 -- # grep oacs 00:07:56.981 08:48:13 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:07:56.981 08:48:13 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:07:56.981 08:48:13 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:07:56.981 08:48:13 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:07:56.981 08:48:13 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:07:56.981 08:48:13 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:07:56.981 08:48:13 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:07:56.981 08:48:13 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:07:56.981 08:48:13 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:07:56.981 08:48:13 -- common/autotest_common.sh@1553 -- # continue 00:07:56.981 08:48:13 -- spdk/autotest.sh@144 -- # timing_exit pre_cleanup 00:07:56.981 08:48:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.981 08:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 08:48:13 -- spdk/autotest.sh@147 -- # timing_enter afterboot 00:07:57.239 08:48:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:57.239 08:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 08:48:13 -- spdk/autotest.sh@148 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:57.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.063 08:48:14 -- spdk/autotest.sh@149 -- # timing_exit afterboot 00:07:58.063 08:48:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.063 08:48:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.063 08:48:14 -- spdk/autotest.sh@153 -- # opal_revert_cleanup 00:07:58.063 08:48:14 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:07:58.063 08:48:14 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:07:58.063 08:48:14 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:07:58.063 08:48:14 -- common/autotest_common.sh@1573 -- # local bdfs 00:07:58.063 08:48:14 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:07:58.063 08:48:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:07:58.063 08:48:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:07:58.063 08:48:14 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.063 08:48:14 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.063 08:48:14 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:07:58.063 08:48:14 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:07:58.063 08:48:14 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:58.063 08:48:14 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:07:58.063 08:48:14 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:58.063 08:48:14 -- common/autotest_common.sh@1576 -- # device=0x0010 00:07:58.063 08:48:14 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:58.063 08:48:14 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:07:58.063 08:48:14 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:58.063 08:48:14 -- common/autotest_common.sh@1576 -- # device=0x0010 00:07:58.063 08:48:14 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:58.063 08:48:14 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:07:58.063 08:48:14 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:07:58.063 08:48:14 -- common/autotest_common.sh@1589 -- # return 0 00:07:58.063 08:48:14 -- spdk/autotest.sh@159 -- # '[' 0 -eq 1 ']' 00:07:58.063 08:48:14 -- spdk/autotest.sh@163 -- # '[' 1 -eq 1 ']' 00:07:58.063 08:48:14 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:58.063 08:48:14 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:58.063 08:48:14 -- spdk/autotest.sh@171 -- # timing_enter lib 00:07:58.063 08:48:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:58.063 08:48:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.063 08:48:14 -- spdk/autotest.sh@173 -- # [[ 0 -eq 1 ]] 00:07:58.063 08:48:14 -- spdk/autotest.sh@177 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:58.063 08:48:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.063 08:48:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.063 08:48:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.063 ************************************ 00:07:58.063 START TEST env 00:07:58.063 ************************************ 00:07:58.063 08:48:14 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:58.063 * Looking for test storage... 
00:07:58.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:58.063 08:48:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:58.063 08:48:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.064 08:48:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.064 08:48:14 env -- common/autotest_common.sh@10 -- # set +x 00:07:58.064 ************************************ 00:07:58.064 START TEST env_memory 00:07:58.064 ************************************ 00:07:58.064 08:48:14 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:58.323 00:07:58.323 00:07:58.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.323 http://cunit.sourceforge.net/ 00:07:58.323 00:07:58.323 00:07:58.323 Suite: memory 00:07:58.323 Test: alloc and free memory map ...[2024-05-15 08:48:14.330736] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:58.323 passed 00:07:58.323 Test: mem map translation ...[2024-05-15 08:48:14.361879] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:58.323 [2024-05-15 08:48:14.361925] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:58.323 [2024-05-15 08:48:14.361981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:58.323 [2024-05-15 08:48:14.361991] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:58.323 passed 00:07:58.323 Test: mem map registration ...[2024-05-15 08:48:14.426255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:58.323 [2024-05-15 08:48:14.426308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:58.323 passed 00:07:58.323 Test: mem map adjacent registrations ...passed 00:07:58.323 00:07:58.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.323 suites 1 1 n/a 0 0 00:07:58.323 tests 4 4 4 0 0 00:07:58.323 asserts 152 152 152 0 n/a 00:07:58.323 00:07:58.323 Elapsed time = 0.214 seconds 00:07:58.323 00:07:58.323 real 0m0.231s 00:07:58.323 user 0m0.215s 00:07:58.323 sys 0m0.013s 00:07:58.323 08:48:14 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.323 08:48:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:58.323 ************************************ 00:07:58.323 END TEST env_memory 00:07:58.323 ************************************ 00:07:58.323 08:48:14 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:58.323 08:48:14 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.323 08:48:14 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.323 08:48:14 env -- common/autotest_common.sh@10 -- # set +x 00:07:58.582 ************************************ 00:07:58.582 START TEST env_vtophys 00:07:58.582 ************************************ 00:07:58.582 08:48:14 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:58.582 EAL: lib.eal log level changed from notice to debug 00:07:58.582 EAL: Detected lcore 0 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 1 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 2 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 3 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 4 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 5 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 6 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 7 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 8 as core 0 on socket 0 00:07:58.582 EAL: Detected lcore 9 as core 0 on socket 0 00:07:58.582 EAL: Maximum logical cores by configuration: 128 00:07:58.582 EAL: Detected CPU lcores: 10 00:07:58.582 EAL: Detected NUMA nodes: 1 00:07:58.582 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:58.582 EAL: Detected shared linkage of DPDK 00:07:58.582 EAL: No shared files mode enabled, IPC will be disabled 00:07:58.582 EAL: Selected IOVA mode 'PA' 00:07:58.582 EAL: Probing VFIO support... 00:07:58.582 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:58.582 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:58.582 EAL: Ask a virtual area of 0x2e000 bytes 00:07:58.582 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:58.582 EAL: Setting up physically contiguous memory... 00:07:58.582 EAL: Setting maximum number of open files to 524288 00:07:58.582 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:58.582 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:58.582 EAL: Ask a virtual area of 0x61000 bytes 00:07:58.582 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:58.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:58.582 EAL: Ask a virtual area of 0x400000000 bytes 00:07:58.582 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:58.582 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:58.582 EAL: Ask a virtual area of 0x61000 bytes 00:07:58.582 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:58.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:58.582 EAL: Ask a virtual area of 0x400000000 bytes 00:07:58.582 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:58.582 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:58.582 EAL: Ask a virtual area of 0x61000 bytes 00:07:58.582 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:58.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:58.582 EAL: Ask a virtual area of 0x400000000 bytes 00:07:58.582 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:58.582 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:58.582 EAL: Ask a virtual area of 0x61000 bytes 00:07:58.582 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:58.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:58.582 EAL: Ask a virtual area of 0x400000000 bytes 00:07:58.582 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:58.582 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:58.582 EAL: Hugepages will be freed exactly as allocated. 
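The memseg lists reserved above are backed by the 2 MB hugepage pool reported earlier in the status output (node0 2048kB 2048 / 2048). A quick manual check of that pool, shown only as an illustration of the standard kernel sysfs interface rather than as part of the test:
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages     # total 2 MB pages reserved (2048 on this VM)
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages   # pages still available for EAL to map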
00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: TSC frequency is ~2200000 KHz 00:07:58.582 EAL: Main lcore 0 is ready (tid=7f09fcd18a00;cpuset=[0]) 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 0 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 2MB 00:07:58.582 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:58.582 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:58.582 EAL: Mem event callback 'spdk:(nil)' registered 00:07:58.582 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:58.582 00:07:58.582 00:07:58.582 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.582 http://cunit.sourceforge.net/ 00:07:58.582 00:07:58.582 00:07:58.582 Suite: components_suite 00:07:58.582 Test: vtophys_malloc_test ...passed 00:07:58.582 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 4MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 4MB 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 6MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 6MB 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 10MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 10MB 00:07:58.582 EAL: Trying to obtain current memory policy. 
00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 18MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 18MB 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 34MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 34MB 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 66MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 66MB 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.582 EAL: Restoring previous memory policy: 4 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was expanded by 130MB 00:07:58.582 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.582 EAL: request: mp_malloc_sync 00:07:58.582 EAL: No shared files mode enabled, IPC is disabled 00:07:58.582 EAL: Heap on socket 0 was shrunk by 130MB 00:07:58.582 EAL: Trying to obtain current memory policy. 00:07:58.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.841 EAL: Restoring previous memory policy: 4 00:07:58.841 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.841 EAL: request: mp_malloc_sync 00:07:58.841 EAL: No shared files mode enabled, IPC is disabled 00:07:58.841 EAL: Heap on socket 0 was expanded by 258MB 00:07:58.841 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.841 EAL: request: mp_malloc_sync 00:07:58.841 EAL: No shared files mode enabled, IPC is disabled 00:07:58.841 EAL: Heap on socket 0 was shrunk by 258MB 00:07:58.841 EAL: Trying to obtain current memory policy. 
00:07:58.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.841 EAL: Restoring previous memory policy: 4 00:07:58.841 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.841 EAL: request: mp_malloc_sync 00:07:58.841 EAL: No shared files mode enabled, IPC is disabled 00:07:58.841 EAL: Heap on socket 0 was expanded by 514MB 00:07:58.841 EAL: Calling mem event callback 'spdk:(nil)' 00:07:59.100 EAL: request: mp_malloc_sync 00:07:59.100 EAL: No shared files mode enabled, IPC is disabled 00:07:59.100 EAL: Heap on socket 0 was shrunk by 514MB 00:07:59.100 EAL: Trying to obtain current memory policy. 00:07:59.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:59.100 EAL: Restoring previous memory policy: 4 00:07:59.100 EAL: Calling mem event callback 'spdk:(nil)' 00:07:59.100 EAL: request: mp_malloc_sync 00:07:59.100 EAL: No shared files mode enabled, IPC is disabled 00:07:59.100 EAL: Heap on socket 0 was expanded by 1026MB 00:07:59.359 EAL: Calling mem event callback 'spdk:(nil)' 00:07:59.359 passed 00:07:59.359 00:07:59.359 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.359 suites 1 1 n/a 0 0 00:07:59.359 tests 2 2 2 0 0 00:07:59.359 asserts 5295 5295 5295 0 n/a 00:07:59.359 00:07:59.359 Elapsed time = 0.700 seconds 00:07:59.359 EAL: request: mp_malloc_sync 00:07:59.359 EAL: No shared files mode enabled, IPC is disabled 00:07:59.359 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:59.359 EAL: Calling mem event callback 'spdk:(nil)' 00:07:59.359 EAL: request: mp_malloc_sync 00:07:59.359 EAL: No shared files mode enabled, IPC is disabled 00:07:59.359 EAL: Heap on socket 0 was shrunk by 2MB 00:07:59.359 EAL: No shared files mode enabled, IPC is disabled 00:07:59.359 EAL: No shared files mode enabled, IPC is disabled 00:07:59.359 EAL: No shared files mode enabled, IPC is disabled 00:07:59.359 00:07:59.359 real 0m0.892s 00:07:59.359 user 0m0.450s 00:07:59.359 sys 0m0.308s 00:07:59.359 ************************************ 00:07:59.359 END TEST env_vtophys 00:07:59.359 ************************************ 00:07:59.359 08:48:15 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.359 08:48:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:59.359 08:48:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:59.359 08:48:15 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.359 08:48:15 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.359 08:48:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:59.359 ************************************ 00:07:59.359 START TEST env_pci 00:07:59.359 ************************************ 00:07:59.359 08:48:15 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:59.359 00:07:59.359 00:07:59.359 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.359 http://cunit.sourceforge.net/ 00:07:59.359 00:07:59.359 00:07:59.359 Suite: pci 00:07:59.359 Test: pci_hook ...[2024-05-15 08:48:15.514340] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59870 has claimed it 00:07:59.359 passed 00:07:59.359 00:07:59.359 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.359 suites 1 1 n/a 0 0 00:07:59.359 tests 1 1 1 0 0 00:07:59.359 asserts 25 25 25 0 n/a 00:07:59.359 00:07:59.359 Elapsed time = 0.002 seconds 00:07:59.359 EAL: Cannot find 
device (10000:00:01.0) 00:07:59.359 EAL: Failed to attach device on primary process 00:07:59.359 00:07:59.359 real 0m0.023s 00:07:59.360 user 0m0.014s 00:07:59.360 sys 0m0.008s 00:07:59.360 08:48:15 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.360 ************************************ 00:07:59.360 END TEST env_pci 00:07:59.360 ************************************ 00:07:59.360 08:48:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 08:48:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:59.360 08:48:15 env -- env/env.sh@15 -- # uname 00:07:59.360 08:48:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:59.360 08:48:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:59.360 08:48:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:59.360 08:48:15 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:59.360 08:48:15 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.360 08:48:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:59.360 ************************************ 00:07:59.360 START TEST env_dpdk_post_init 00:07:59.360 ************************************ 00:07:59.360 08:48:15 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:59.619 EAL: Detected CPU lcores: 10 00:07:59.619 EAL: Detected NUMA nodes: 1 00:07:59.619 EAL: Detected shared linkage of DPDK 00:07:59.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:59.619 EAL: Selected IOVA mode 'PA' 00:07:59.619 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:59.619 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:59.619 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:59.619 Starting DPDK initialization... 00:07:59.619 Starting SPDK post initialization... 00:07:59.619 SPDK NVMe probe 00:07:59.619 Attaching to 0000:00:10.0 00:07:59.619 Attaching to 0000:00:11.0 00:07:59.619 Attached to 0000:00:10.0 00:07:59.619 Attached to 0000:00:11.0 00:07:59.619 Cleaning up... 
00:07:59.619 00:07:59.619 real 0m0.177s 00:07:59.619 user 0m0.045s 00:07:59.619 sys 0m0.033s 00:07:59.619 08:48:15 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.619 08:48:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 ************************************ 00:07:59.619 END TEST env_dpdk_post_init 00:07:59.619 ************************************ 00:07:59.619 08:48:15 env -- env/env.sh@26 -- # uname 00:07:59.619 08:48:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:59.619 08:48:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:59.619 08:48:15 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.619 08:48:15 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.619 08:48:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 ************************************ 00:07:59.619 START TEST env_mem_callbacks 00:07:59.619 ************************************ 00:07:59.619 08:48:15 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:59.619 EAL: Detected CPU lcores: 10 00:07:59.619 EAL: Detected NUMA nodes: 1 00:07:59.619 EAL: Detected shared linkage of DPDK 00:07:59.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:59.619 EAL: Selected IOVA mode 'PA' 00:07:59.877 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:59.877 00:07:59.877 00:07:59.877 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.877 http://cunit.sourceforge.net/ 00:07:59.877 00:07:59.877 00:07:59.877 Suite: memory 00:07:59.877 Test: test ... 00:07:59.877 register 0x200000200000 2097152 00:07:59.877 malloc 3145728 00:07:59.877 register 0x200000400000 4194304 00:07:59.877 buf 0x200000500000 len 3145728 PASSED 00:07:59.877 malloc 64 00:07:59.877 buf 0x2000004fff40 len 64 PASSED 00:07:59.877 malloc 4194304 00:07:59.877 register 0x200000800000 6291456 00:07:59.877 buf 0x200000a00000 len 4194304 PASSED 00:07:59.877 free 0x200000500000 3145728 00:07:59.877 free 0x2000004fff40 64 00:07:59.877 unregister 0x200000400000 4194304 PASSED 00:07:59.877 free 0x200000a00000 4194304 00:07:59.877 unregister 0x200000800000 6291456 PASSED 00:07:59.877 malloc 8388608 00:07:59.877 register 0x200000400000 10485760 00:07:59.877 buf 0x200000600000 len 8388608 PASSED 00:07:59.877 free 0x200000600000 8388608 00:07:59.877 unregister 0x200000400000 10485760 PASSED 00:07:59.877 passed 00:07:59.877 00:07:59.877 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.877 suites 1 1 n/a 0 0 00:07:59.877 tests 1 1 1 0 0 00:07:59.877 asserts 15 15 15 0 n/a 00:07:59.877 00:07:59.877 Elapsed time = 0.007 seconds 00:07:59.877 00:07:59.877 real 0m0.142s 00:07:59.877 user 0m0.020s 00:07:59.877 sys 0m0.021s 00:07:59.877 08:48:15 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.877 08:48:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:59.877 ************************************ 00:07:59.877 END TEST env_mem_callbacks 00:07:59.877 ************************************ 00:07:59.877 ************************************ 00:07:59.877 END TEST env 00:07:59.877 ************************************ 00:07:59.877 00:07:59.877 real 0m1.783s 00:07:59.877 user 0m0.862s 00:07:59.878 sys 0m0.572s 00:07:59.878 08:48:15 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.878 08:48:15 env -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.878 08:48:16 -- spdk/autotest.sh@178 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:59.878 08:48:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.878 08:48:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.878 08:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.878 ************************************ 00:07:59.878 START TEST rpc 00:07:59.878 ************************************ 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:59.878 * Looking for test storage... 00:07:59.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:59.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.878 08:48:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59985 00:07:59.878 08:48:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.878 08:48:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59985 00:07:59.878 08:48:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@827 -- # '[' -z 59985 ']' 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.878 08:48:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.150 [2024-05-15 08:48:16.164270] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:00.150 [2024-05-15 08:48:16.164393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:08:00.150 [2024-05-15 08:48:16.300142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.150 [2024-05-15 08:48:16.369484] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:00.150 [2024-05-15 08:48:16.369549] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59985' to capture a snapshot of events at runtime. 00:08:00.150 [2024-05-15 08:48:16.369582] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.150 [2024-05-15 08:48:16.369594] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.150 [2024-05-15 08:48:16.369602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59985 for offline analysis/debug. 
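At this point spdk_tgt (pid 59985) is up with the 'bdev' tracepoint group enabled via -e bdev, and the NOTICE lines above spell out two ways to look at the resulting trace. A short sketch of both, taken from those hints (the /tmp destination is just illustrative):

  # Snapshot the live tracepoints from the running target...
  spdk_trace -s spdk_tgt -p 59985
  # ...or keep the shared-memory file around for offline decoding later.
  cp /dev/shm/spdk_tgt_trace.pid59985 /tmp/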
00:08:00.150 [2024-05-15 08:48:16.369631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.419 08:48:16 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.419 08:48:16 rpc -- common/autotest_common.sh@860 -- # return 0 00:08:00.419 08:48:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:00.419 08:48:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:00.419 08:48:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:00.419 08:48:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:00.419 08:48:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:00.419 08:48:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.419 08:48:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.419 ************************************ 00:08:00.419 START TEST rpc_integrity 00:08:00.419 ************************************ 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.419 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:00.419 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:00.420 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.420 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.678 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.678 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:00.678 { 00:08:00.678 "aliases": [ 00:08:00.678 "dca15792-d8cf-4aa4-9135-a149b8d2edf7" 00:08:00.678 ], 00:08:00.678 "assigned_rate_limits": { 00:08:00.678 "r_mbytes_per_sec": 0, 00:08:00.678 "rw_ios_per_sec": 0, 00:08:00.678 "rw_mbytes_per_sec": 0, 00:08:00.678 "w_mbytes_per_sec": 0 00:08:00.678 }, 00:08:00.678 "block_size": 512, 00:08:00.679 "claimed": false, 00:08:00.679 "driver_specific": {}, 00:08:00.679 "memory_domains": [ 00:08:00.679 { 00:08:00.679 "dma_device_id": "system", 00:08:00.679 "dma_device_type": 1 00:08:00.679 }, 00:08:00.679 { 00:08:00.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.679 "dma_device_type": 2 00:08:00.679 } 00:08:00.679 ], 00:08:00.679 "name": "Malloc0", 
00:08:00.679 "num_blocks": 16384, 00:08:00.679 "product_name": "Malloc disk", 00:08:00.679 "supported_io_types": { 00:08:00.679 "abort": true, 00:08:00.679 "compare": false, 00:08:00.679 "compare_and_write": false, 00:08:00.679 "flush": true, 00:08:00.679 "nvme_admin": false, 00:08:00.679 "nvme_io": false, 00:08:00.679 "read": true, 00:08:00.679 "reset": true, 00:08:00.679 "unmap": true, 00:08:00.679 "write": true, 00:08:00.679 "write_zeroes": true 00:08:00.679 }, 00:08:00.679 "uuid": "dca15792-d8cf-4aa4-9135-a149b8d2edf7", 00:08:00.679 "zoned": false 00:08:00.679 } 00:08:00.679 ]' 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.679 [2024-05-15 08:48:16.709905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:00.679 [2024-05-15 08:48:16.710153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.679 [2024-05-15 08:48:16.710348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1265da0 00:08:00.679 [2024-05-15 08:48:16.710534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.679 [2024-05-15 08:48:16.712643] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.679 [2024-05-15 08:48:16.712829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:00.679 Passthru0 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:00.679 { 00:08:00.679 "aliases": [ 00:08:00.679 "dca15792-d8cf-4aa4-9135-a149b8d2edf7" 00:08:00.679 ], 00:08:00.679 "assigned_rate_limits": { 00:08:00.679 "r_mbytes_per_sec": 0, 00:08:00.679 "rw_ios_per_sec": 0, 00:08:00.679 "rw_mbytes_per_sec": 0, 00:08:00.679 "w_mbytes_per_sec": 0 00:08:00.679 }, 00:08:00.679 "block_size": 512, 00:08:00.679 "claim_type": "exclusive_write", 00:08:00.679 "claimed": true, 00:08:00.679 "driver_specific": {}, 00:08:00.679 "memory_domains": [ 00:08:00.679 { 00:08:00.679 "dma_device_id": "system", 00:08:00.679 "dma_device_type": 1 00:08:00.679 }, 00:08:00.679 { 00:08:00.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.679 "dma_device_type": 2 00:08:00.679 } 00:08:00.679 ], 00:08:00.679 "name": "Malloc0", 00:08:00.679 "num_blocks": 16384, 00:08:00.679 "product_name": "Malloc disk", 00:08:00.679 "supported_io_types": { 00:08:00.679 "abort": true, 00:08:00.679 "compare": false, 00:08:00.679 "compare_and_write": false, 00:08:00.679 "flush": true, 00:08:00.679 "nvme_admin": false, 00:08:00.679 "nvme_io": false, 00:08:00.679 "read": true, 00:08:00.679 "reset": true, 00:08:00.679 "unmap": true, 00:08:00.679 "write": true, 00:08:00.679 "write_zeroes": true 00:08:00.679 }, 00:08:00.679 "uuid": 
"dca15792-d8cf-4aa4-9135-a149b8d2edf7", 00:08:00.679 "zoned": false 00:08:00.679 }, 00:08:00.679 { 00:08:00.679 "aliases": [ 00:08:00.679 "bb543524-acf6-5180-83f8-763bf6eb1326" 00:08:00.679 ], 00:08:00.679 "assigned_rate_limits": { 00:08:00.679 "r_mbytes_per_sec": 0, 00:08:00.679 "rw_ios_per_sec": 0, 00:08:00.679 "rw_mbytes_per_sec": 0, 00:08:00.679 "w_mbytes_per_sec": 0 00:08:00.679 }, 00:08:00.679 "block_size": 512, 00:08:00.679 "claimed": false, 00:08:00.679 "driver_specific": { 00:08:00.679 "passthru": { 00:08:00.679 "base_bdev_name": "Malloc0", 00:08:00.679 "name": "Passthru0" 00:08:00.679 } 00:08:00.679 }, 00:08:00.679 "memory_domains": [ 00:08:00.679 { 00:08:00.679 "dma_device_id": "system", 00:08:00.679 "dma_device_type": 1 00:08:00.679 }, 00:08:00.679 { 00:08:00.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.679 "dma_device_type": 2 00:08:00.679 } 00:08:00.679 ], 00:08:00.679 "name": "Passthru0", 00:08:00.679 "num_blocks": 16384, 00:08:00.679 "product_name": "passthru", 00:08:00.679 "supported_io_types": { 00:08:00.679 "abort": true, 00:08:00.679 "compare": false, 00:08:00.679 "compare_and_write": false, 00:08:00.679 "flush": true, 00:08:00.679 "nvme_admin": false, 00:08:00.679 "nvme_io": false, 00:08:00.679 "read": true, 00:08:00.679 "reset": true, 00:08:00.679 "unmap": true, 00:08:00.679 "write": true, 00:08:00.679 "write_zeroes": true 00:08:00.679 }, 00:08:00.679 "uuid": "bb543524-acf6-5180-83f8-763bf6eb1326", 00:08:00.679 "zoned": false 00:08:00.679 } 00:08:00.679 ]' 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:00.679 ************************************ 00:08:00.679 END TEST rpc_integrity 00:08:00.679 ************************************ 00:08:00.679 08:48:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:00.679 00:08:00.679 real 0m0.332s 00:08:00.679 user 0m0.221s 00:08:00.679 sys 0m0.033s 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.679 08:48:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.938 08:48:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:00.938 08:48:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:00.938 
08:48:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.938 08:48:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.938 ************************************ 00:08:00.938 START TEST rpc_plugins 00:08:00.938 ************************************ 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:08:00.938 08:48:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.938 08:48:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:00.938 08:48:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:00.938 08:48:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.938 08:48:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:00.938 { 00:08:00.938 "aliases": [ 00:08:00.938 "837fd65f-d714-4f86-afb4-d2f3f83488b8" 00:08:00.938 ], 00:08:00.938 "assigned_rate_limits": { 00:08:00.938 "r_mbytes_per_sec": 0, 00:08:00.938 "rw_ios_per_sec": 0, 00:08:00.938 "rw_mbytes_per_sec": 0, 00:08:00.938 "w_mbytes_per_sec": 0 00:08:00.938 }, 00:08:00.938 "block_size": 4096, 00:08:00.938 "claimed": false, 00:08:00.938 "driver_specific": {}, 00:08:00.938 "memory_domains": [ 00:08:00.938 { 00:08:00.938 "dma_device_id": "system", 00:08:00.938 "dma_device_type": 1 00:08:00.938 }, 00:08:00.938 { 00:08:00.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.938 "dma_device_type": 2 00:08:00.938 } 00:08:00.938 ], 00:08:00.938 "name": "Malloc1", 00:08:00.938 "num_blocks": 256, 00:08:00.938 "product_name": "Malloc disk", 00:08:00.938 "supported_io_types": { 00:08:00.938 "abort": true, 00:08:00.938 "compare": false, 00:08:00.938 "compare_and_write": false, 00:08:00.938 "flush": true, 00:08:00.938 "nvme_admin": false, 00:08:00.938 "nvme_io": false, 00:08:00.938 "read": true, 00:08:00.938 "reset": true, 00:08:00.938 "unmap": true, 00:08:00.938 "write": true, 00:08:00.938 "write_zeroes": true 00:08:00.938 }, 00:08:00.938 "uuid": "837fd65f-d714-4f86-afb4-d2f3f83488b8", 00:08:00.938 "zoned": false 00:08:00.938 } 00:08:00.938 ]' 00:08:00.938 08:48:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:00.938 08:48:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:00.939 08:48:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.939 08:48:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.939 08:48:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:00.939 08:48:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:00.939 
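The rpc_plugins steps above depend on the PYTHONPATH exported earlier in this run (it includes test/rpc_plugins), which lets rpc.py import the test's rpc_plugin module and pick up its create_malloc/delete_malloc methods. A sketch of the same calls outside the harness, with the plugin directory taken from this workspace:

  # Make the plugin module importable, then invoke the methods it registers.
  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/rpc_plugins:$PYTHONPATH
  scripts/rpc.py --plugin rpc_plugin create_malloc              # prints Malloc1 (4096-byte blocks above)
  scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1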
************************************ 00:08:00.939 END TEST rpc_plugins 00:08:00.939 ************************************ 00:08:00.939 08:48:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:00.939 00:08:00.939 real 0m0.183s 00:08:00.939 user 0m0.119s 00:08:00.939 sys 0m0.025s 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.939 08:48:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:00.939 08:48:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:00.939 08:48:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:00.939 08:48:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.939 08:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.939 ************************************ 00:08:00.939 START TEST rpc_trace_cmd_test 00:08:00.939 ************************************ 00:08:00.939 08:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:08:00.939 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:00.939 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:00.939 08:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.939 08:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.197 08:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.197 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:01.197 "bdev": { 00:08:01.197 "mask": "0x8", 00:08:01.197 "tpoint_mask": "0xffffffffffffffff" 00:08:01.197 }, 00:08:01.198 "bdev_nvme": { 00:08:01.198 "mask": "0x4000", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "blobfs": { 00:08:01.198 "mask": "0x80", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "dsa": { 00:08:01.198 "mask": "0x200", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "ftl": { 00:08:01.198 "mask": "0x40", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "iaa": { 00:08:01.198 "mask": "0x1000", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "iscsi_conn": { 00:08:01.198 "mask": "0x2", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "nvme_pcie": { 00:08:01.198 "mask": "0x800", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "nvme_tcp": { 00:08:01.198 "mask": "0x2000", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "nvmf_rdma": { 00:08:01.198 "mask": "0x10", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "nvmf_tcp": { 00:08:01.198 "mask": "0x20", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "scsi": { 00:08:01.198 "mask": "0x4", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "sock": { 00:08:01.198 "mask": "0x8000", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "thread": { 00:08:01.198 "mask": "0x400", 00:08:01.198 "tpoint_mask": "0x0" 00:08:01.198 }, 00:08:01.198 "tpoint_group_mask": "0x8", 00:08:01.198 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59985" 00:08:01.198 }' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:01.198 08:48:17 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:01.198 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:01.457 ************************************ 00:08:01.457 END TEST rpc_trace_cmd_test 00:08:01.457 ************************************ 00:08:01.457 08:48:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:01.457 00:08:01.457 real 0m0.288s 00:08:01.457 user 0m0.254s 00:08:01.457 sys 0m0.024s 00:08:01.457 08:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.457 08:48:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 08:48:17 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:08:01.457 08:48:17 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:08:01.457 08:48:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.457 08:48:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.457 08:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 ************************************ 00:08:01.457 START TEST go_rpc 00:08:01.457 ************************************ 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["91f275db-6f6d-4d59-aac3-4adf3384eab9"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"91f275db-6f6d-4d59-aac3-4adf3384eab9","zoned":false}]' 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.457 08:48:17 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
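rpc_trace_cmd_test (finished above) validates trace_get_info output: with -e bdev the bdev group reports tpoint_mask 0xffffffffffffffff, every other group stays at 0x0, and tpoint_shm_path points at /dev/shm/spdk_tgt_trace.pid59985. A sketch of the same checks done by hand, mirroring the jq filters in the trace:

  # Which tracepoint groups are enabled on the running target?
  scripts/rpc.py trace_get_info | jq 'has("tpoint_group_mask")'   # true
  scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask         # 0xffffffffffffffff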
00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:08:01.457 08:48:17 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:08:01.716 ************************************ 00:08:01.716 END TEST go_rpc 00:08:01.716 ************************************ 00:08:01.716 08:48:17 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:08:01.716 00:08:01.716 real 0m0.230s 00:08:01.716 user 0m0.158s 00:08:01.716 sys 0m0.033s 00:08:01.716 08:48:17 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.716 08:48:17 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 08:48:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:01.716 08:48:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:01.716 08:48:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.716 08:48:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.716 08:48:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 ************************************ 00:08:01.716 START TEST rpc_daemon_integrity 00:08:01.716 ************************************ 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:01.716 { 00:08:01.716 "aliases": [ 00:08:01.716 "762b8e03-2e0d-4138-a869-21e1ad352224" 00:08:01.716 ], 00:08:01.716 "assigned_rate_limits": { 00:08:01.716 "r_mbytes_per_sec": 0, 00:08:01.716 "rw_ios_per_sec": 0, 00:08:01.716 "rw_mbytes_per_sec": 0, 00:08:01.716 "w_mbytes_per_sec": 0 00:08:01.716 }, 00:08:01.716 "block_size": 512, 00:08:01.716 "claimed": false, 00:08:01.716 "driver_specific": {}, 00:08:01.716 "memory_domains": [ 00:08:01.716 { 00:08:01.716 "dma_device_id": "system", 00:08:01.716 "dma_device_type": 1 00:08:01.716 }, 00:08:01.716 { 00:08:01.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.716 "dma_device_type": 2 00:08:01.716 } 
00:08:01.716 ], 00:08:01.716 "name": "Malloc3", 00:08:01.716 "num_blocks": 16384, 00:08:01.716 "product_name": "Malloc disk", 00:08:01.716 "supported_io_types": { 00:08:01.716 "abort": true, 00:08:01.716 "compare": false, 00:08:01.716 "compare_and_write": false, 00:08:01.716 "flush": true, 00:08:01.716 "nvme_admin": false, 00:08:01.716 "nvme_io": false, 00:08:01.716 "read": true, 00:08:01.716 "reset": true, 00:08:01.716 "unmap": true, 00:08:01.716 "write": true, 00:08:01.716 "write_zeroes": true 00:08:01.716 }, 00:08:01.716 "uuid": "762b8e03-2e0d-4138-a869-21e1ad352224", 00:08:01.716 "zoned": false 00:08:01.716 } 00:08:01.716 ]' 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 [2024-05-15 08:48:17.926493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:01.716 [2024-05-15 08:48:17.926547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.716 [2024-05-15 08:48:17.926582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b7fa0 00:08:01.716 [2024-05-15 08:48:17.926595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.716 [2024-05-15 08:48:17.928014] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.716 [2024-05-15 08:48:17.928062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:01.716 Passthru0 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.716 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.975 08:48:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.975 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:01.975 { 00:08:01.975 "aliases": [ 00:08:01.975 "762b8e03-2e0d-4138-a869-21e1ad352224" 00:08:01.975 ], 00:08:01.975 "assigned_rate_limits": { 00:08:01.975 "r_mbytes_per_sec": 0, 00:08:01.975 "rw_ios_per_sec": 0, 00:08:01.975 "rw_mbytes_per_sec": 0, 00:08:01.975 "w_mbytes_per_sec": 0 00:08:01.975 }, 00:08:01.975 "block_size": 512, 00:08:01.975 "claim_type": "exclusive_write", 00:08:01.975 "claimed": true, 00:08:01.975 "driver_specific": {}, 00:08:01.975 "memory_domains": [ 00:08:01.975 { 00:08:01.975 "dma_device_id": "system", 00:08:01.975 "dma_device_type": 1 00:08:01.975 }, 00:08:01.975 { 00:08:01.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.975 "dma_device_type": 2 00:08:01.975 } 00:08:01.975 ], 00:08:01.975 "name": "Malloc3", 00:08:01.975 "num_blocks": 16384, 00:08:01.975 "product_name": "Malloc disk", 00:08:01.975 "supported_io_types": { 00:08:01.975 "abort": true, 00:08:01.975 "compare": false, 00:08:01.975 "compare_and_write": false, 00:08:01.975 "flush": true, 00:08:01.975 "nvme_admin": false, 00:08:01.975 "nvme_io": false, 00:08:01.975 "read": true, 00:08:01.975 "reset": true, 00:08:01.975 
"unmap": true, 00:08:01.975 "write": true, 00:08:01.975 "write_zeroes": true 00:08:01.975 }, 00:08:01.975 "uuid": "762b8e03-2e0d-4138-a869-21e1ad352224", 00:08:01.975 "zoned": false 00:08:01.975 }, 00:08:01.975 { 00:08:01.975 "aliases": [ 00:08:01.975 "242a789e-f612-5321-8d0e-a25bb55b6af9" 00:08:01.975 ], 00:08:01.975 "assigned_rate_limits": { 00:08:01.975 "r_mbytes_per_sec": 0, 00:08:01.975 "rw_ios_per_sec": 0, 00:08:01.975 "rw_mbytes_per_sec": 0, 00:08:01.975 "w_mbytes_per_sec": 0 00:08:01.975 }, 00:08:01.975 "block_size": 512, 00:08:01.975 "claimed": false, 00:08:01.975 "driver_specific": { 00:08:01.975 "passthru": { 00:08:01.975 "base_bdev_name": "Malloc3", 00:08:01.975 "name": "Passthru0" 00:08:01.975 } 00:08:01.975 }, 00:08:01.975 "memory_domains": [ 00:08:01.975 { 00:08:01.975 "dma_device_id": "system", 00:08:01.975 "dma_device_type": 1 00:08:01.975 }, 00:08:01.975 { 00:08:01.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.975 "dma_device_type": 2 00:08:01.975 } 00:08:01.975 ], 00:08:01.975 "name": "Passthru0", 00:08:01.975 "num_blocks": 16384, 00:08:01.975 "product_name": "passthru", 00:08:01.975 "supported_io_types": { 00:08:01.975 "abort": true, 00:08:01.975 "compare": false, 00:08:01.975 "compare_and_write": false, 00:08:01.975 "flush": true, 00:08:01.975 "nvme_admin": false, 00:08:01.975 "nvme_io": false, 00:08:01.975 "read": true, 00:08:01.975 "reset": true, 00:08:01.975 "unmap": true, 00:08:01.975 "write": true, 00:08:01.975 "write_zeroes": true 00:08:01.975 }, 00:08:01.975 "uuid": "242a789e-f612-5321-8d0e-a25bb55b6af9", 00:08:01.975 "zoned": false 00:08:01.975 } 00:08:01.975 ]' 00:08:01.975 08:48:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:01.975 00:08:01.975 real 0m0.314s 00:08:01.975 user 0m0.203s 00:08:01.975 sys 0m0.042s 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.975 ************************************ 00:08:01.975 END TEST rpc_daemon_integrity 00:08:01.975 08:48:18 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.975 ************************************ 00:08:01.975 08:48:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:01.975 08:48:18 rpc -- rpc/rpc.sh@84 -- # killprocess 59985 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@946 -- # '[' -z 59985 ']' 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@950 -- # kill -0 59985 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@951 -- # uname 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59985 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:01.975 killing process with pid 59985 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59985' 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@965 -- # kill 59985 00:08:01.975 08:48:18 rpc -- common/autotest_common.sh@970 -- # wait 59985 00:08:02.233 00:08:02.233 real 0m2.408s 00:08:02.233 user 0m3.369s 00:08:02.233 sys 0m0.596s 00:08:02.233 08:48:18 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.233 08:48:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.233 ************************************ 00:08:02.233 END TEST rpc 00:08:02.233 ************************************ 00:08:02.492 08:48:18 -- spdk/autotest.sh@179 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:02.492 08:48:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:02.492 08:48:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.492 08:48:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.492 ************************************ 00:08:02.492 START TEST skip_rpc 00:08:02.492 ************************************ 00:08:02.492 08:48:18 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:02.492 * Looking for test storage... 00:08:02.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:02.492 08:48:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:02.492 08:48:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:02.492 08:48:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:02.492 08:48:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:02.492 08:48:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.492 08:48:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.492 ************************************ 00:08:02.492 START TEST skip_rpc 00:08:02.492 ************************************ 00:08:02.492 08:48:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:08:02.492 08:48:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60227 00:08:02.492 08:48:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.492 08:48:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:02.492 08:48:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:02.492 [2024-05-15 08:48:18.626011] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:08:02.492 [2024-05-15 08:48:18.626544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 00:08:02.751 [2024-05-15 08:48:18.766063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.751 [2024-05-15 08:48:18.826582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.058 2024/05/15 08:48:23 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60227 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 60227 ']' 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 60227 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60227 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.058 killing process with pid 60227 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60227' 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 60227 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 60227 00:08:08.058 00:08:08.058 real 0m5.327s 00:08:08.058 user 0m5.040s 00:08:08.058 sys 0m0.190s 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:08:08.058 08:48:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.058 ************************************ 00:08:08.058 END TEST skip_rpc 00:08:08.058 ************************************ 00:08:08.058 08:48:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:08.058 08:48:23 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:08.058 08:48:23 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.058 08:48:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.058 ************************************ 00:08:08.058 START TEST skip_rpc_with_json 00:08:08.058 ************************************ 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60324 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60324 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 60324 ']' 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:08.058 08:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:08.058 [2024-05-15 08:48:24.008766] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
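The plain skip_rpc pass summarized above starts spdk_tgt with --no-rpc-server, so nothing listens on /var/tmp/spdk.sock and the deliberate spdk_get_version call fails with "no such file or directory"; that failure is the pass condition. A condensed sketch of the probe (the 5-second sleep mirrors the harness):

  # With --no-rpc-server there is no RPC listener, so the call is expected to fail.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  scripts/rpc.py spdk_get_version || echo 'RPC refused, as skip_rpc expects'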
00:08:08.058 [2024-05-15 08:48:24.008876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:08:08.058 [2024-05-15 08:48:24.144073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.058 [2024-05-15 08:48:24.223131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:08.995 [2024-05-15 08:48:25.049214] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:08.995 2024/05/15 08:48:25 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:08:08.995 request: 00:08:08.995 { 00:08:08.995 "method": "nvmf_get_transports", 00:08:08.995 "params": { 00:08:08.995 "trtype": "tcp" 00:08:08.995 } 00:08:08.995 } 00:08:08.995 Got JSON-RPC error response 00:08:08.995 GoRPCClient: error on JSON-RPC call 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:08.995 [2024-05-15 08:48:25.061281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.995 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:09.254 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.254 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:09.254 { 00:08:09.254 "subsystems": [ 00:08:09.254 { 00:08:09.254 "subsystem": "keyring", 00:08:09.254 "config": [] 00:08:09.254 }, 00:08:09.254 { 00:08:09.254 "subsystem": "iobuf", 00:08:09.254 "config": [ 00:08:09.254 { 00:08:09.254 "method": "iobuf_set_options", 00:08:09.254 "params": { 00:08:09.254 "large_bufsize": 135168, 00:08:09.254 "large_pool_count": 1024, 00:08:09.254 "small_bufsize": 8192, 00:08:09.254 "small_pool_count": 8192 00:08:09.254 } 00:08:09.254 } 00:08:09.254 ] 00:08:09.254 }, 00:08:09.254 { 00:08:09.254 "subsystem": "sock", 00:08:09.254 "config": [ 00:08:09.254 { 00:08:09.254 "method": "sock_set_default_impl", 00:08:09.254 "params": { 00:08:09.254 "impl_name": "posix" 00:08:09.254 } 00:08:09.254 }, 00:08:09.254 { 00:08:09.254 "method": "sock_impl_set_options", 00:08:09.254 "params": 
{ 00:08:09.254 "enable_ktls": false, 00:08:09.254 "enable_placement_id": 0, 00:08:09.254 "enable_quickack": false, 00:08:09.254 "enable_recv_pipe": true, 00:08:09.254 "enable_zerocopy_send_client": false, 00:08:09.254 "enable_zerocopy_send_server": true, 00:08:09.254 "impl_name": "ssl", 00:08:09.254 "recv_buf_size": 4096, 00:08:09.254 "send_buf_size": 4096, 00:08:09.254 "tls_version": 0, 00:08:09.254 "zerocopy_threshold": 0 00:08:09.254 } 00:08:09.254 }, 00:08:09.254 { 00:08:09.254 "method": "sock_impl_set_options", 00:08:09.254 "params": { 00:08:09.254 "enable_ktls": false, 00:08:09.254 "enable_placement_id": 0, 00:08:09.254 "enable_quickack": false, 00:08:09.254 "enable_recv_pipe": true, 00:08:09.254 "enable_zerocopy_send_client": false, 00:08:09.254 "enable_zerocopy_send_server": true, 00:08:09.254 "impl_name": "posix", 00:08:09.254 "recv_buf_size": 2097152, 00:08:09.254 "send_buf_size": 2097152, 00:08:09.254 "tls_version": 0, 00:08:09.254 "zerocopy_threshold": 0 00:08:09.254 } 00:08:09.254 } 00:08:09.254 ] 00:08:09.254 }, 00:08:09.254 { 00:08:09.254 "subsystem": "vmd", 00:08:09.254 "config": [] 00:08:09.254 }, 00:08:09.254 { 00:08:09.254 "subsystem": "accel", 00:08:09.254 "config": [ 00:08:09.254 { 00:08:09.254 "method": "accel_set_options", 00:08:09.254 "params": { 00:08:09.254 "buf_count": 2048, 00:08:09.254 "large_cache_size": 16, 00:08:09.255 "sequence_count": 2048, 00:08:09.255 "small_cache_size": 128, 00:08:09.255 "task_count": 2048 00:08:09.255 } 00:08:09.255 } 00:08:09.255 ] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "bdev", 00:08:09.255 "config": [ 00:08:09.255 { 00:08:09.255 "method": "bdev_set_options", 00:08:09.255 "params": { 00:08:09.255 "bdev_auto_examine": true, 00:08:09.255 "bdev_io_cache_size": 256, 00:08:09.255 "bdev_io_pool_size": 65535, 00:08:09.255 "iobuf_large_cache_size": 16, 00:08:09.255 "iobuf_small_cache_size": 128 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "bdev_raid_set_options", 00:08:09.255 "params": { 00:08:09.255 "process_window_size_kb": 1024 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "bdev_iscsi_set_options", 00:08:09.255 "params": { 00:08:09.255 "timeout_sec": 30 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "bdev_nvme_set_options", 00:08:09.255 "params": { 00:08:09.255 "action_on_timeout": "none", 00:08:09.255 "allow_accel_sequence": false, 00:08:09.255 "arbitration_burst": 0, 00:08:09.255 "bdev_retry_count": 3, 00:08:09.255 "ctrlr_loss_timeout_sec": 0, 00:08:09.255 "delay_cmd_submit": true, 00:08:09.255 "dhchap_dhgroups": [ 00:08:09.255 "null", 00:08:09.255 "ffdhe2048", 00:08:09.255 "ffdhe3072", 00:08:09.255 "ffdhe4096", 00:08:09.255 "ffdhe6144", 00:08:09.255 "ffdhe8192" 00:08:09.255 ], 00:08:09.255 "dhchap_digests": [ 00:08:09.255 "sha256", 00:08:09.255 "sha384", 00:08:09.255 "sha512" 00:08:09.255 ], 00:08:09.255 "disable_auto_failback": false, 00:08:09.255 "fast_io_fail_timeout_sec": 0, 00:08:09.255 "generate_uuids": false, 00:08:09.255 "high_priority_weight": 0, 00:08:09.255 "io_path_stat": false, 00:08:09.255 "io_queue_requests": 0, 00:08:09.255 "keep_alive_timeout_ms": 10000, 00:08:09.255 "low_priority_weight": 0, 00:08:09.255 "medium_priority_weight": 0, 00:08:09.255 "nvme_adminq_poll_period_us": 10000, 00:08:09.255 "nvme_error_stat": false, 00:08:09.255 "nvme_ioq_poll_period_us": 0, 00:08:09.255 "rdma_cm_event_timeout_ms": 0, 00:08:09.255 "rdma_max_cq_size": 0, 00:08:09.255 "rdma_srq_size": 0, 00:08:09.255 "reconnect_delay_sec": 0, 00:08:09.255 
"timeout_admin_us": 0, 00:08:09.255 "timeout_us": 0, 00:08:09.255 "transport_ack_timeout": 0, 00:08:09.255 "transport_retry_count": 4, 00:08:09.255 "transport_tos": 0 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "bdev_nvme_set_hotplug", 00:08:09.255 "params": { 00:08:09.255 "enable": false, 00:08:09.255 "period_us": 100000 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "bdev_wait_for_examine" 00:08:09.255 } 00:08:09.255 ] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "scsi", 00:08:09.255 "config": null 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "scheduler", 00:08:09.255 "config": [ 00:08:09.255 { 00:08:09.255 "method": "framework_set_scheduler", 00:08:09.255 "params": { 00:08:09.255 "name": "static" 00:08:09.255 } 00:08:09.255 } 00:08:09.255 ] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "vhost_scsi", 00:08:09.255 "config": [] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "vhost_blk", 00:08:09.255 "config": [] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "ublk", 00:08:09.255 "config": [] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "nbd", 00:08:09.255 "config": [] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "nvmf", 00:08:09.255 "config": [ 00:08:09.255 { 00:08:09.255 "method": "nvmf_set_config", 00:08:09.255 "params": { 00:08:09.255 "admin_cmd_passthru": { 00:08:09.255 "identify_ctrlr": false 00:08:09.255 }, 00:08:09.255 "discovery_filter": "match_any" 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "nvmf_set_max_subsystems", 00:08:09.255 "params": { 00:08:09.255 "max_subsystems": 1024 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "nvmf_set_crdt", 00:08:09.255 "params": { 00:08:09.255 "crdt1": 0, 00:08:09.255 "crdt2": 0, 00:08:09.255 "crdt3": 0 00:08:09.255 } 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "method": "nvmf_create_transport", 00:08:09.255 "params": { 00:08:09.255 "abort_timeout_sec": 1, 00:08:09.255 "ack_timeout": 0, 00:08:09.255 "buf_cache_size": 4294967295, 00:08:09.255 "c2h_success": true, 00:08:09.255 "data_wr_pool_size": 0, 00:08:09.255 "dif_insert_or_strip": false, 00:08:09.255 "in_capsule_data_size": 4096, 00:08:09.255 "io_unit_size": 131072, 00:08:09.255 "max_aq_depth": 128, 00:08:09.255 "max_io_qpairs_per_ctrlr": 127, 00:08:09.255 "max_io_size": 131072, 00:08:09.255 "max_queue_depth": 128, 00:08:09.255 "num_shared_buffers": 511, 00:08:09.255 "sock_priority": 0, 00:08:09.255 "trtype": "TCP", 00:08:09.255 "zcopy": false 00:08:09.255 } 00:08:09.255 } 00:08:09.255 ] 00:08:09.255 }, 00:08:09.255 { 00:08:09.255 "subsystem": "iscsi", 00:08:09.255 "config": [ 00:08:09.255 { 00:08:09.255 "method": "iscsi_set_options", 00:08:09.255 "params": { 00:08:09.255 "allow_duplicated_isid": false, 00:08:09.255 "chap_group": 0, 00:08:09.255 "data_out_pool_size": 2048, 00:08:09.255 "default_time2retain": 20, 00:08:09.255 "default_time2wait": 2, 00:08:09.255 "disable_chap": false, 00:08:09.255 "error_recovery_level": 0, 00:08:09.255 "first_burst_length": 8192, 00:08:09.255 "immediate_data": true, 00:08:09.255 "immediate_data_pool_size": 16384, 00:08:09.255 "max_connections_per_session": 2, 00:08:09.255 "max_large_datain_per_connection": 64, 00:08:09.255 "max_queue_depth": 64, 00:08:09.255 "max_r2t_per_connection": 4, 00:08:09.255 "max_sessions": 128, 00:08:09.255 "mutual_chap": false, 00:08:09.255 "node_base": "iqn.2016-06.io.spdk", 00:08:09.255 "nop_in_interval": 30, 00:08:09.255 "nop_timeout": 60, 00:08:09.255 
"pdu_pool_size": 36864, 00:08:09.255 "require_chap": false 00:08:09.255 } 00:08:09.255 } 00:08:09.255 ] 00:08:09.255 } 00:08:09.255 ] 00:08:09.255 } 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60324 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60324 ']' 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60324 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60324 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:09.255 killing process with pid 60324 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60324' 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60324 00:08:09.255 08:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60324 00:08:09.513 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60359 00:08:09.513 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:09.513 08:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60359 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60359 ']' 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60359 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60359 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:14.815 killing process with pid 60359 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60359' 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60359 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60359 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:14.815 08:48:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:14.815 00:08:14.815 real 0m6.960s 00:08:14.815 user 0m6.897s 00:08:14.815 sys 0m0.501s 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:14.816 ************************************ 00:08:14.816 END TEST skip_rpc_with_json 00:08:14.816 ************************************ 00:08:14.816 08:48:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:14.816 08:48:30 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:14.816 08:48:30 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.816 08:48:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.816 ************************************ 00:08:14.816 START TEST skip_rpc_with_delay 00:08:14.816 ************************************ 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:14.816 08:48:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:14.816 [2024-05-15 08:48:31.018530] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
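The error above is the expected outcome: skip_rpc_with_delay launches spdk_tgt with both --no-rpc-server and --wait-for-rpc, which the app layer rejects because there would be no way to resume startup. For contrast, a sketch of the normal --wait-for-rpc flow; framework_start_init is the standard RPC for finishing initialization and does not appear in this log:

  # start the target paused before subsystem initialization
  ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  # ... issue any pre-init RPCs here ...
  # tell the target to finish bringing its subsystems up
  ./scripts/rpc.py framework_start_init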
00:08:14.816 [2024-05-15 08:48:31.018671] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:14.816 08:48:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:08:14.816 08:48:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.816 08:48:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.816 08:48:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.816 00:08:14.816 real 0m0.082s 00:08:14.816 user 0m0.048s 00:08:14.816 sys 0m0.034s 00:08:14.816 08:48:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.816 08:48:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:14.816 ************************************ 00:08:14.816 END TEST skip_rpc_with_delay 00:08:14.816 ************************************ 00:08:15.078 08:48:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:15.078 08:48:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:15.078 08:48:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:15.078 08:48:31 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:15.078 08:48:31 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.078 08:48:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.078 ************************************ 00:08:15.078 START TEST exit_on_failed_rpc_init 00:08:15.078 ************************************ 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60469 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60469 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 60469 ']' 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:15.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:15.078 08:48:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:15.078 [2024-05-15 08:48:31.153328] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:08:15.078 [2024-05-15 08:48:31.153432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60469 ] 00:08:15.078 [2024-05-15 08:48:31.291195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.337 [2024-05-15 08:48:31.353730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:16.272 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:16.272 [2024-05-15 08:48:32.213697] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:16.272 [2024-05-15 08:48:32.213798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60499 ] 00:08:16.272 [2024-05-15 08:48:32.354884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.272 [2024-05-15 08:48:32.441611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.272 [2024-05-15 08:48:32.441705] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:16.272 [2024-05-15 08:48:32.441722] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:16.272 [2024-05-15 08:48:32.441732] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60469 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 60469 ']' 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 60469 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60469 00:08:16.531 killing process with pid 60469 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60469' 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 60469 00:08:16.531 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 60469 00:08:16.789 00:08:16.789 real 0m1.775s 00:08:16.789 user 0m2.239s 00:08:16.789 sys 0m0.302s 00:08:16.789 ************************************ 00:08:16.789 END TEST exit_on_failed_rpc_init 00:08:16.789 ************************************ 00:08:16.789 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.789 08:48:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:16.789 08:48:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:16.789 00:08:16.789 real 0m14.431s 00:08:16.789 user 0m14.317s 00:08:16.789 sys 0m1.209s 00:08:16.789 08:48:32 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.789 ************************************ 00:08:16.789 08:48:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.789 END TEST skip_rpc 00:08:16.789 ************************************ 00:08:16.789 08:48:32 -- spdk/autotest.sh@180 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:16.789 08:48:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:16.789 08:48:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.789 08:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:16.789 
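exit_on_failed_rpc_init, which just finished above, provokes the failure by starting a second spdk_tgt while the first one still owns the default RPC socket. A minimal reproduction, with -r showing how a second instance would normally be given its own socket (the spdk2.sock path is illustrative):

  ./build/bin/spdk_tgt -m 0x1 &                           # owns /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2                             # fails: socket already in use
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &    # distinct socket, no clash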
************************************ 00:08:16.789 START TEST rpc_client 00:08:16.789 ************************************ 00:08:16.789 08:48:32 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:16.789 * Looking for test storage... 00:08:17.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:17.048 08:48:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:17.048 OK 00:08:17.048 08:48:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:17.048 ************************************ 00:08:17.048 END TEST rpc_client 00:08:17.048 ************************************ 00:08:17.048 00:08:17.048 real 0m0.095s 00:08:17.048 user 0m0.038s 00:08:17.048 sys 0m0.065s 00:08:17.048 08:48:33 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.048 08:48:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 08:48:33 -- spdk/autotest.sh@181 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:17.048 08:48:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:17.048 08:48:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.048 08:48:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 ************************************ 00:08:17.048 START TEST json_config 00:08:17.048 ************************************ 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.048 08:48:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.048 08:48:33 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.048 08:48:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.048 08:48:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.048 08:48:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.048 08:48:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.048 08:48:33 json_config -- paths/export.sh@5 -- # export PATH 00:08:17.048 08:48:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@47 -- # : 0 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.048 08:48:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:17.048 INFO: JSON configuration test init 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 
00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 Waiting for target to run... 00:08:17.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:17.048 08:48:33 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:08:17.048 08:48:33 json_config -- json_config/common.sh@9 -- # local app=target 00:08:17.048 08:48:33 json_config -- json_config/common.sh@10 -- # shift 00:08:17.048 08:48:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:17.048 08:48:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:17.048 08:48:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:17.048 08:48:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:17.048 08:48:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:17.048 08:48:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60617 00:08:17.048 08:48:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:17.048 08:48:33 json_config -- json_config/common.sh@25 -- # waitforlisten 60617 /var/tmp/spdk_tgt.sock 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@827 -- # '[' -z 60617 ']' 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.048 08:48:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:17.048 08:48:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:17.048 [2024-05-15 08:48:33.258081] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:17.048 [2024-05-15 08:48:33.258692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60617 ] 00:08:17.651 [2024-05-15 08:48:33.562978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.651 [2024-05-15 08:48:33.622518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.224 00:08:18.224 08:48:34 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.224 08:48:34 json_config -- common/autotest_common.sh@860 -- # return 0 00:08:18.224 08:48:34 json_config -- json_config/common.sh@26 -- # echo '' 00:08:18.224 08:48:34 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:08:18.224 08:48:34 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:08:18.224 08:48:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.224 08:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.224 08:48:34 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:08:18.224 08:48:34 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:08:18.224 08:48:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.224 08:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.224 08:48:34 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:18.224 08:48:34 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:08:18.224 08:48:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:18.792 08:48:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.792 08:48:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:18.792 08:48:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:18.792 08:48:34 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:08:19.050 
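The notify_get_types call traced above is how the json_config test confirms which notification types the target emits before it starts changing configuration. Run directly, the same check looks like this; the expected values are the ones the test compares against on the next line of this log:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
  # bdev_register
  # bdev_unregister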
08:48:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:08:19.050 08:48:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.050 08:48:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:08:19.050 08:48:35 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.050 08:48:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:08:19.050 08:48:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:08:19.051 08:48:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:19.051 08:48:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:19.308 MallocForNvmf0 00:08:19.308 08:48:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:19.308 08:48:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:19.566 MallocForNvmf1 00:08:19.566 08:48:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:19.566 08:48:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:19.825 [2024-05-15 08:48:36.028397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.825 08:48:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:19.825 08:48:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:20.391 08:48:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:20.391 08:48:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:20.649 08:48:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:20.649 08:48:36 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:20.907 08:48:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:20.907 08:48:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:21.163 [2024-05-15 08:48:37.212732] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:21.163 [2024-05-15 08:48:37.213003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:21.164 08:48:37 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:08:21.164 08:48:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.164 08:48:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.164 08:48:37 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:08:21.164 08:48:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.164 08:48:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.164 08:48:37 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:08:21.164 08:48:37 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:21.164 08:48:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:21.421 MallocBdevForConfigChangeCheck 00:08:21.421 08:48:37 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:08:21.421 08:48:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.421 08:48:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.421 08:48:37 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:08:21.421 08:48:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:21.986 08:48:38 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:08:21.986 INFO: shutting down applications... 
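The sequence traced above builds the NVMe-oF/TCP configuration that the rest of the test round-trips. Collected into one place, with every call and argument taken from the trace; only the RPC shell variable and the redirect of save_config into spdk_tgt_config.json are added for readability:

  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  $RPC save_config > spdk_tgt_config.json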
00:08:21.986 08:48:38 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:08:21.986 08:48:38 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:08:21.986 08:48:38 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:08:21.986 08:48:38 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:22.243 Calling clear_iscsi_subsystem 00:08:22.243 Calling clear_nvmf_subsystem 00:08:22.243 Calling clear_nbd_subsystem 00:08:22.243 Calling clear_ublk_subsystem 00:08:22.243 Calling clear_vhost_blk_subsystem 00:08:22.243 Calling clear_vhost_scsi_subsystem 00:08:22.243 Calling clear_bdev_subsystem 00:08:22.243 08:48:38 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:22.243 08:48:38 json_config -- json_config/json_config.sh@343 -- # count=100 00:08:22.243 08:48:38 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:08:22.243 08:48:38 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:22.243 08:48:38 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:22.243 08:48:38 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:22.808 08:48:38 json_config -- json_config/json_config.sh@345 -- # break 00:08:22.808 08:48:38 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:08:22.808 08:48:38 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:08:22.808 08:48:38 json_config -- json_config/common.sh@31 -- # local app=target 00:08:22.808 08:48:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:22.808 08:48:38 json_config -- json_config/common.sh@35 -- # [[ -n 60617 ]] 00:08:22.808 08:48:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60617 00:08:22.808 [2024-05-15 08:48:38.861056] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:22.808 08:48:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:22.808 08:48:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:22.808 08:48:38 json_config -- json_config/common.sh@41 -- # kill -0 60617 00:08:22.808 08:48:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:23.408 08:48:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:23.408 08:48:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:23.408 08:48:39 json_config -- json_config/common.sh@41 -- # kill -0 60617 00:08:23.408 08:48:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:23.408 08:48:39 json_config -- json_config/common.sh@43 -- # break 00:08:23.408 08:48:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:23.408 SPDK target shutdown done 00:08:23.408 08:48:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:23.408 INFO: relaunching applications... 00:08:23.408 08:48:39 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
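Before relaunching, json_config_clear walks each subsystem's clear helper and then verifies that nothing is left behind; the shutdown itself is a SIGINT followed by polling until the pid disappears. A condensed sketch of that teardown, assuming the three traced filter commands are piped together as shown and with $pid standing in for the target's pid (60617 here):

  # empty out the running configuration and confirm it is empty
  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method delete_global_parameters \
      | ./test/json_config/config_filter.py -method check_empty
  # graceful shutdown: SIGINT, then wait for the process to exit
  kill -SIGINT "$pid"
  while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done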
00:08:23.408 08:48:39 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:23.408 08:48:39 json_config -- json_config/common.sh@9 -- # local app=target 00:08:23.408 08:48:39 json_config -- json_config/common.sh@10 -- # shift 00:08:23.408 08:48:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:23.408 08:48:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:23.408 08:48:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:23.408 08:48:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:23.408 08:48:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:23.408 08:48:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60897 00:08:23.408 08:48:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:23.408 Waiting for target to run... 00:08:23.408 08:48:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:23.408 08:48:39 json_config -- json_config/common.sh@25 -- # waitforlisten 60897 /var/tmp/spdk_tgt.sock 00:08:23.408 08:48:39 json_config -- common/autotest_common.sh@827 -- # '[' -z 60897 ']' 00:08:23.408 08:48:39 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:23.408 08:48:39 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:23.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:23.408 08:48:39 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:23.408 08:48:39 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:23.408 08:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:23.408 [2024-05-15 08:48:39.445505] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:23.408 [2024-05-15 08:48:39.445643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60897 ] 00:08:23.666 [2024-05-15 08:48:39.749213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.666 [2024-05-15 08:48:39.807697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.925 [2024-05-15 08:48:40.121944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.925 [2024-05-15 08:48:40.153775] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:23.925 [2024-05-15 08:48:40.154028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:24.493 08:48:40 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:24.493 08:48:40 json_config -- common/autotest_common.sh@860 -- # return 0 00:08:24.493 00:08:24.493 08:48:40 json_config -- json_config/common.sh@26 -- # echo '' 00:08:24.493 08:48:40 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:08:24.493 INFO: Checking if target configuration is the same... 
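The comparison announced above is done by json_diff.sh: both the live configuration and the file the target was started from are normalized with config_filter.py's sort method and then diffed. A sketch of the same check under that assumption, with illustrative temp file names instead of the mktemp outputs seen below:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  ./test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > /tmp/ondisk.json
  diff -u /tmp/live.json /tmp/ondisk.json && echo 'INFO: JSON config files are the same'

The later "configuration change detected" pass repeats exactly this comparison after deleting MallocBdevForConfigChangeCheck, so the diff returns non-zero and the test records the change.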
00:08:24.493 08:48:40 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:24.493 08:48:40 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:24.493 08:48:40 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:08:24.493 08:48:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:24.493 + '[' 2 -ne 2 ']' 00:08:24.493 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:24.493 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:24.493 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:24.493 +++ basename /dev/fd/62 00:08:24.493 ++ mktemp /tmp/62.XXX 00:08:24.493 + tmp_file_1=/tmp/62.Y9V 00:08:24.493 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:24.493 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:24.493 + tmp_file_2=/tmp/spdk_tgt_config.json.yyI 00:08:24.493 + ret=0 00:08:24.493 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:24.752 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:24.752 + diff -u /tmp/62.Y9V /tmp/spdk_tgt_config.json.yyI 00:08:24.752 + echo 'INFO: JSON config files are the same' 00:08:24.752 INFO: JSON config files are the same 00:08:24.752 + rm /tmp/62.Y9V /tmp/spdk_tgt_config.json.yyI 00:08:24.752 + exit 0 00:08:24.752 08:48:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:08:24.752 INFO: changing configuration and checking if this can be detected... 00:08:24.752 08:48:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:24.752 08:48:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:24.752 08:48:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:25.012 08:48:41 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:25.012 08:48:41 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:08:25.012 08:48:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:25.012 + '[' 2 -ne 2 ']' 00:08:25.012 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:25.012 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:25.012 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:25.012 +++ basename /dev/fd/62 00:08:25.271 ++ mktemp /tmp/62.XXX 00:08:25.271 + tmp_file_1=/tmp/62.tls 00:08:25.271 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:25.271 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:25.271 + tmp_file_2=/tmp/spdk_tgt_config.json.ooo 00:08:25.271 + ret=0 00:08:25.271 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:25.530 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:25.530 + diff -u /tmp/62.tls /tmp/spdk_tgt_config.json.ooo 00:08:25.530 + ret=1 00:08:25.530 + echo '=== Start of file: /tmp/62.tls ===' 00:08:25.530 + cat /tmp/62.tls 00:08:25.530 + echo '=== End of file: /tmp/62.tls ===' 00:08:25.530 + echo '' 00:08:25.530 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ooo ===' 00:08:25.530 + cat /tmp/spdk_tgt_config.json.ooo 00:08:25.530 + echo '=== End of file: /tmp/spdk_tgt_config.json.ooo ===' 00:08:25.530 + echo '' 00:08:25.530 + rm /tmp/62.tls /tmp/spdk_tgt_config.json.ooo 00:08:25.530 + exit 1 00:08:25.530 INFO: configuration change detected. 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:08:25.530 08:48:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:25.530 08:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@317 -- # [[ -n 60897 ]] 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:08:25.530 08:48:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:25.530 08:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.530 08:48:41 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:08:25.789 08:48:41 json_config -- json_config/json_config.sh@193 -- # uname -s 00:08:25.789 08:48:41 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:08:25.789 08:48:41 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:08:25.789 08:48:41 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:08:25.789 08:48:41 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.789 08:48:41 json_config -- json_config/json_config.sh@323 -- # killprocess 60897 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@946 -- # '[' -z 60897 ']' 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@950 -- # kill -0 60897 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@951 -- # uname 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60897 00:08:25.789 
08:48:41 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:25.789 killing process with pid 60897 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60897' 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@965 -- # kill 60897 00:08:25.789 [2024-05-15 08:48:41.838978] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:25.789 08:48:41 json_config -- common/autotest_common.sh@970 -- # wait 60897 00:08:26.048 08:48:42 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:26.048 08:48:42 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:08:26.048 08:48:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.048 08:48:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 08:48:42 json_config -- json_config/json_config.sh@328 -- # return 0 00:08:26.048 INFO: Success 00:08:26.048 08:48:42 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:08:26.048 00:08:26.048 real 0m8.982s 00:08:26.048 user 0m13.470s 00:08:26.048 sys 0m1.550s 00:08:26.048 08:48:42 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.048 ************************************ 00:08:26.048 END TEST json_config 00:08:26.048 08:48:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 ************************************ 00:08:26.048 08:48:42 -- spdk/autotest.sh@182 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:26.048 08:48:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.048 08:48:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.048 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 ************************************ 00:08:26.048 START TEST json_config_extra_key 00:08:26.048 ************************************ 00:08:26.048 08:48:42 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.048 08:48:42 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.048 08:48:42 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.048 08:48:42 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.048 08:48:42 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.048 08:48:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.048 08:48:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.048 08:48:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.048 08:48:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:26.048 08:48:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.048 08:48:42 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.048 08:48:42 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:26.048 INFO: launching applications... 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:26.048 08:48:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61066 00:08:26.049 Waiting for target to run... 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61066 /var/tmp/spdk_tgt.sock 00:08:26.049 08:48:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:26.049 08:48:42 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 61066 ']' 00:08:26.049 08:48:42 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:26.049 08:48:42 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:26.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:26.049 08:48:42 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:26.049 08:48:42 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:26.049 08:48:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:26.307 [2024-05-15 08:48:42.285785] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:26.307 [2024-05-15 08:48:42.285892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61066 ] 00:08:26.565 [2024-05-15 08:48:42.593942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.565 [2024-05-15 08:48:42.658148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.132 08:48:43 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.132 00:08:27.132 08:48:43 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:27.132 INFO: shutting down applications... 00:08:27.132 08:48:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:08:27.132 08:48:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61066 ]] 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61066 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61066 00:08:27.132 08:48:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61066 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:27.699 SPDK target shutdown done 00:08:27.699 Success 00:08:27.699 08:48:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:27.699 08:48:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:27.699 00:08:27.699 real 0m1.689s 00:08:27.699 user 0m1.646s 00:08:27.699 sys 0m0.315s 00:08:27.699 ************************************ 00:08:27.699 END TEST json_config_extra_key 00:08:27.699 ************************************ 00:08:27.699 08:48:43 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.699 08:48:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:27.699 08:48:43 -- spdk/autotest.sh@183 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:27.699 08:48:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.699 08:48:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.699 08:48:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.699 ************************************ 00:08:27.699 START TEST alias_rpc 00:08:27.699 ************************************ 00:08:27.699 08:48:43 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:27.958 * Looking for test storage... 00:08:27.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:27.958 08:48:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:27.958 08:48:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61148 00:08:27.958 08:48:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.958 08:48:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61148 00:08:27.958 08:48:43 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 61148 ']' 00:08:27.958 08:48:43 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
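The alias_rpc test starting here follows the same start/stop pattern, with an ERR trap so any failed step tears the target down, and exercises rpc.py via load_config. A compressed sketch of the flow visible in this stretch of the trace (waitforlisten and killprocess are autotest_common.sh helpers; this is not the verbatim alias_rpc.sh):

    # Compressed sketch of the alias_rpc flow traced here; spdk_tgt_pid is 61148 in this run.
    "$SPDK_DIR/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; exit 1' ERR
    waitforlisten "$spdk_tgt_pid"
    "$SPDK_DIR/scripts/rpc.py" load_config -i              # the RPC call exercised by this test, flag as shown in the trace
    killprocess "$spdk_tgt_pid"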
00:08:27.958 08:48:43 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:27.958 08:48:43 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.958 08:48:43 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:27.958 08:48:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.958 [2024-05-15 08:48:44.019646] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:27.958 [2024-05-15 08:48:44.019756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:08:27.958 [2024-05-15 08:48:44.158654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.216 [2024-05-15 08:48:44.228592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:29.152 08:48:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:29.152 08:48:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61148 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 61148 ']' 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 61148 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:29.152 08:48:45 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61148 00:08:29.452 killing process with pid 61148 00:08:29.452 08:48:45 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:29.452 08:48:45 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:29.452 08:48:45 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61148' 00:08:29.452 08:48:45 alias_rpc -- common/autotest_common.sh@965 -- # kill 61148 00:08:29.452 08:48:45 alias_rpc -- common/autotest_common.sh@970 -- # wait 61148 00:08:29.710 ************************************ 00:08:29.710 END TEST alias_rpc 00:08:29.710 ************************************ 00:08:29.710 00:08:29.710 real 0m1.815s 00:08:29.710 user 0m2.256s 00:08:29.710 sys 0m0.350s 00:08:29.710 08:48:45 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.710 08:48:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.710 08:48:45 -- spdk/autotest.sh@185 -- # [[ 1 -eq 0 ]] 00:08:29.710 08:48:45 -- spdk/autotest.sh@189 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:29.710 08:48:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:29.710 08:48:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.710 08:48:45 -- common/autotest_common.sh@10 -- # set +x 00:08:29.710 ************************************ 00:08:29.710 START TEST dpdk_mem_utility 00:08:29.710 ************************************ 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:29.710 * Looking for test storage... 
00:08:29.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:29.710 08:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:29.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.710 08:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61240 00:08:29.710 08:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.710 08:48:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61240 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 61240 ']' 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:29.710 08:48:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:29.710 [2024-05-15 08:48:45.900110] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:29.710 [2024-05-15 08:48:45.900221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61240 ] 00:08:29.968 [2024-05-15 08:48:46.038972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.969 [2024-05-15 08:48:46.110066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.905 08:48:46 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:30.905 08:48:46 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:08:30.905 08:48:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:30.905 08:48:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:30.905 08:48:46 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.905 08:48:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:30.905 { 00:08:30.905 "filename": "/tmp/spdk_mem_dump.txt" 00:08:30.905 } 00:08:30.905 08:48:46 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.905 08:48:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:30.905 DPDK memory size 814.000000 MiB in 1 heap(s) 00:08:30.905 1 heaps totaling size 814.000000 MiB 00:08:30.905 size: 814.000000 MiB heap id: 0 00:08:30.905 end heaps---------- 00:08:30.905 8 mempools totaling size 598.116089 MiB 00:08:30.905 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:30.905 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:30.905 size: 84.521057 MiB name: bdev_io_61240 00:08:30.905 size: 51.011292 MiB name: evtpool_61240 00:08:30.905 size: 50.003479 MiB name: msgpool_61240 00:08:30.905 size: 21.763794 MiB name: PDU_Pool 00:08:30.905 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:30.905 size: 0.026123 
MiB name: Session_Pool 00:08:30.905 end mempools------- 00:08:30.905 6 memzones totaling size 4.142822 MiB 00:08:30.905 size: 1.000366 MiB name: RG_ring_0_61240 00:08:30.905 size: 1.000366 MiB name: RG_ring_1_61240 00:08:30.905 size: 1.000366 MiB name: RG_ring_4_61240 00:08:30.905 size: 1.000366 MiB name: RG_ring_5_61240 00:08:30.905 size: 0.125366 MiB name: RG_ring_2_61240 00:08:30.905 size: 0.015991 MiB name: RG_ring_3_61240 00:08:30.905 end memzones------- 00:08:30.905 08:48:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:30.905 heap id: 0 total size: 814.000000 MiB number of busy elements: 224 number of free elements: 15 00:08:30.905 list of free elements. size: 12.485840 MiB 00:08:30.905 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:30.905 element at address: 0x200018e00000 with size: 0.999878 MiB 00:08:30.905 element at address: 0x200019000000 with size: 0.999878 MiB 00:08:30.905 element at address: 0x200003e00000 with size: 0.996277 MiB 00:08:30.905 element at address: 0x200031c00000 with size: 0.994446 MiB 00:08:30.905 element at address: 0x200013800000 with size: 0.978699 MiB 00:08:30.905 element at address: 0x200007000000 with size: 0.959839 MiB 00:08:30.905 element at address: 0x200019200000 with size: 0.936584 MiB 00:08:30.905 element at address: 0x200000200000 with size: 0.837036 MiB 00:08:30.905 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:08:30.906 element at address: 0x20000b200000 with size: 0.489807 MiB 00:08:30.906 element at address: 0x200000800000 with size: 0.487061 MiB 00:08:30.906 element at address: 0x200019400000 with size: 0.485657 MiB 00:08:30.906 element at address: 0x200027e00000 with size: 0.398499 MiB 00:08:30.906 element at address: 0x200003a00000 with size: 0.350769 MiB 00:08:30.906 list of standard malloc elements. 
size: 199.251587 MiB 00:08:30.906 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:08:30.906 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:08:30.906 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:30.906 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:08:30.906 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:30.906 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:30.906 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:08:30.906 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:30.906 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:08:30.906 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a59cc0 with size: 0.000183 MiB 
00:08:30.906 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003adb300 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003adb500 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:08:30.906 element at 
address: 0x20001aa92980 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94e40 
with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:08:30.906 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:08:30.907 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e66040 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e66100 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6eac0 with size: 0.000183 MiB 
00:08:30.907 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:08:30.907 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:08:30.907 list of memzone associated elements. 
size: 602.262573 MiB 00:08:30.907 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:08:30.907 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:30.907 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:08:30.907 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:30.907 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:08:30.907 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61240_0 00:08:30.907 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:30.907 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61240_0 00:08:30.907 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:30.907 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61240_0 00:08:30.907 element at address: 0x2000195be940 with size: 20.255554 MiB 00:08:30.907 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:30.907 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:08:30.907 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:30.907 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:30.907 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61240 00:08:30.907 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:30.907 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61240 00:08:30.907 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:30.907 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61240 00:08:30.907 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:08:30.907 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:30.907 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:08:30.907 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:30.907 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:08:30.907 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:30.907 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:08:30.907 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:30.907 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:30.907 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61240 00:08:30.907 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:30.907 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61240 00:08:30.907 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:08:30.907 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61240 00:08:30.907 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:08:30.907 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61240 00:08:30.907 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:08:30.907 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61240 00:08:30.907 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:08:30.907 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:30.907 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:08:30.907 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:30.907 element at address: 0x20001947c540 with size: 0.250488 MiB 00:08:30.907 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:30.907 element at address: 0x200003adf880 with size: 0.125488 MiB 00:08:30.907 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61240 00:08:30.907 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:08:30.907 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:30.907 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:08:30.907 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:30.907 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:08:30.907 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61240 00:08:30.907 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:08:30.907 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:30.907 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:08:30.907 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61240 00:08:30.907 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:08:30.907 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61240 00:08:30.907 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:08:30.907 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:30.907 08:48:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:30.907 08:48:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61240 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 61240 ']' 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 61240 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61240 00:08:30.907 killing process with pid 61240 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61240' 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 61240 00:08:30.907 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 61240 00:08:31.166 00:08:31.166 real 0m1.615s 00:08:31.166 user 0m1.895s 00:08:31.166 sys 0m0.331s 00:08:31.166 ************************************ 00:08:31.166 END TEST dpdk_mem_utility 00:08:31.166 ************************************ 00:08:31.166 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.166 08:48:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:31.424 08:48:47 -- spdk/autotest.sh@190 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:31.424 08:48:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:31.424 08:48:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.424 08:48:47 -- common/autotest_common.sh@10 -- # set +x 00:08:31.424 ************************************ 00:08:31.424 START TEST event 00:08:31.424 ************************************ 00:08:31.424 08:48:47 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:31.424 * Looking for test storage... 
00:08:31.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:31.424 08:48:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:31.424 08:48:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:31.424 08:48:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:31.424 08:48:47 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:31.424 08:48:47 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.424 08:48:47 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.424 ************************************ 00:08:31.424 START TEST event_perf 00:08:31.424 ************************************ 00:08:31.424 08:48:47 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:31.425 Running I/O for 1 seconds...[2024-05-15 08:48:47.518342] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:31.425 [2024-05-15 08:48:47.518489] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61330 ] 00:08:31.683 [2024-05-15 08:48:47.660613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.683 [2024-05-15 08:48:47.735403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.683 [2024-05-15 08:48:47.735531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.683 [2024-05-15 08:48:47.735649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.683 [2024-05-15 08:48:47.735652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.617 Running I/O for 1 seconds... 00:08:32.617 lcore 0: 179156 00:08:32.617 lcore 1: 179155 00:08:32.617 lcore 2: 179154 00:08:32.617 lcore 3: 179155 00:08:32.617 done. 00:08:32.617 00:08:32.617 real 0m1.336s 00:08:32.617 ************************************ 00:08:32.617 END TEST event_perf 00:08:32.617 ************************************ 00:08:32.617 user 0m4.158s 00:08:32.617 sys 0m0.057s 00:08:32.617 08:48:48 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.617 08:48:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 08:48:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:32.875 08:48:48 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:32.875 08:48:48 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.875 08:48:48 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 ************************************ 00:08:32.875 START TEST event_reactor 00:08:32.875 ************************************ 00:08:32.875 08:48:48 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:32.875 [2024-05-15 08:48:48.903775] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:08:32.876 [2024-05-15 08:48:48.903869] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:08:32.876 [2024-05-15 08:48:49.031935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.876 [2024-05-15 08:48:49.091219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.250 test_start 00:08:34.250 oneshot 00:08:34.250 tick 100 00:08:34.250 tick 100 00:08:34.250 tick 250 00:08:34.250 tick 100 00:08:34.250 tick 100 00:08:34.250 tick 100 00:08:34.250 tick 250 00:08:34.250 tick 500 00:08:34.250 tick 100 00:08:34.250 tick 100 00:08:34.250 tick 250 00:08:34.250 tick 100 00:08:34.250 tick 100 00:08:34.250 test_end 00:08:34.250 00:08:34.250 real 0m1.303s 00:08:34.250 user 0m1.155s 00:08:34.250 sys 0m0.042s 00:08:34.250 08:48:50 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.250 08:48:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:34.250 ************************************ 00:08:34.250 END TEST event_reactor 00:08:34.250 ************************************ 00:08:34.250 08:48:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:34.250 08:48:50 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:34.250 08:48:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.250 08:48:50 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.250 ************************************ 00:08:34.250 START TEST event_reactor_perf 00:08:34.250 ************************************ 00:08:34.250 08:48:50 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:34.250 [2024-05-15 08:48:50.258043] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:08:34.250 [2024-05-15 08:48:50.258144] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:08:34.250 [2024-05-15 08:48:50.395660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.250 [2024-05-15 08:48:50.466169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.681 test_start 00:08:35.681 test_end 00:08:35.681 Performance: 338274 events per second 00:08:35.681 00:08:35.681 real 0m1.326s 00:08:35.681 user 0m1.179s 00:08:35.681 sys 0m0.041s 00:08:35.681 08:48:51 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.681 ************************************ 00:08:35.681 END TEST event_reactor_perf 00:08:35.681 ************************************ 00:08:35.681 08:48:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:35.681 08:48:51 event -- event/event.sh@49 -- # uname -s 00:08:35.681 08:48:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:35.681 08:48:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:35.681 08:48:51 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:35.681 08:48:51 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.681 08:48:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:35.681 ************************************ 00:08:35.681 START TEST event_scheduler 00:08:35.681 ************************************ 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:35.681 * Looking for test storage... 00:08:35.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:35.681 08:48:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:35.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.681 08:48:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61460 00:08:35.681 08:48:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:35.681 08:48:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:35.681 08:48:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61460 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 61460 ']' 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:35.681 08:48:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:35.681 [2024-05-15 08:48:51.759336] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:08:35.681 [2024-05-15 08:48:51.759664] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61460 ] 00:08:35.681 [2024-05-15 08:48:51.903803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.940 [2024-05-15 08:48:52.002337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.940 [2024-05-15 08:48:52.002435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.940 [2024-05-15 08:48:52.002488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.940 [2024-05-15 08:48:52.002488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:08:36.877 08:48:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.877 POWER: Env isn't set yet! 00:08:36.877 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:36.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.877 POWER: Cannot set governor of lcore 0 to userspace 00:08:36.877 POWER: Attempting to initialise PSTAT power management... 00:08:36.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.877 POWER: Cannot set governor of lcore 0 to performance 00:08:36.877 POWER: Attempting to initialise AMD PSTATE power management... 00:08:36.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.877 POWER: Cannot set governor of lcore 0 to userspace 00:08:36.877 POWER: Attempting to initialise CPPC power management... 00:08:36.877 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:36.877 POWER: Cannot set governor of lcore 0 to userspace 00:08:36.877 POWER: Attempting to initialise VM power management... 00:08:36.877 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:36.877 POWER: Unable to set Power Management Environment for lcore 0 00:08:36.877 [2024-05-15 08:48:52.792400] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:36.877 [2024-05-15 08:48:52.792426] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:36.877 [2024-05-15 08:48:52.792434] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.877 08:48:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.877 [2024-05-15 08:48:52.847636] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
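For context on the POWER/governor messages above: the scheduler test app is started with --wait-for-rpc, the dynamic scheduler is selected over RPC (it degrades gracefully here because no cpufreq governor is writable inside the VM), and only then is framework init completed. A rough outline under the same assumptions as this run (paths from the trace, rpc_cmd/waitforlisten from autotest_common.sh), not a literal copy of scheduler.sh:

    # Rough outline of the scheduler.sh startup traced above.
    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!                                       # 61460 in this run
    waitforlisten "$scheduler_pid"
    rpc_cmd framework_set_scheduler dynamic                # triggers the governor probing logged above
    rpc_cmd framework_start_init                           # complete startup once the scheduler is set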
00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.877 08:48:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.877 08:48:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.877 ************************************ 00:08:36.877 START TEST scheduler_create_thread 00:08:36.877 ************************************ 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.877 2 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.877 3 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.877 4 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:36.877 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 5 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 6 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 7 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 8 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 9 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 10 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.878 08:48:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 08:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.253 08:48:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:38.253 08:48:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:38.253 08:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.253 08:48:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.628 08:48:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.628 00:08:39.628 real 0m2.613s 00:08:39.628 user 0m0.017s 00:08:39.628 sys 0m0.008s 00:08:39.628 08:48:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.628 08:48:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.628 ************************************ 00:08:39.628 END TEST scheduler_create_thread 00:08:39.628 ************************************ 00:08:39.628 08:48:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:39.628 08:48:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61460 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 61460 ']' 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 61460 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61460 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:08:39.628 killing process with pid 61460 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61460' 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 61460 00:08:39.628 08:48:55 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 61460 00:08:39.885 [2024-05-15 08:48:55.951734] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
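The teardown traced just above is the stock killprocess helper: it refuses to run without a PID, looks up the process name so it never signals a sudo wrapper directly, then kills and waits. A paraphrased sketch of what this trace shows (the real helper in test/common/autotest_common.sh covers more platforms and handles the sudo case differently):

    # Paraphrased sketch of killprocess as exercised above on pid 61460; not the verbatim helper.
    killprocess_sketch() {
            local pid=$1 process_name
            [[ -n $pid ]] || return 1
            kill -0 "$pid" 2>/dev/null || return 0         # already gone, nothing to do
            if [[ $(uname) == Linux ]]; then
                    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 in this run
                    if [[ $process_name == sudo ]]; then
                            return 1                       # refuse to signal a sudo wrapper here
                    fi
            fi
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
    }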
00:08:40.144 00:08:40.144 real 0m4.533s 00:08:40.144 user 0m8.791s 00:08:40.144 sys 0m0.312s 00:08:40.144 08:48:56 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.144 08:48:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:40.144 ************************************ 00:08:40.144 END TEST event_scheduler 00:08:40.144 ************************************ 00:08:40.144 08:48:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:40.144 08:48:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:40.144 08:48:56 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:40.144 08:48:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.144 08:48:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.144 ************************************ 00:08:40.144 START TEST app_repeat 00:08:40.144 ************************************ 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61577 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:40.144 Process app_repeat pid: 61577 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61577' 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:40.144 spdk_app_start Round 0 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:40.144 08:48:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61577 /var/tmp/spdk-nbd.sock 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61577 ']' 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.144 08:48:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.144 [2024-05-15 08:48:56.234103] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
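The app_repeat run that begins here follows the standard SPDK test pattern: start the application in the background, arm a cleanup trap, and block until its RPC socket answers. A simplified sketch of that setup, with the framework's waitforlisten helper replaced by a plain polling loop (rpc_get_methods is used only as a cheap liveness probe; the real helper has more retries and error handling):

    # Sketch only: launch app_repeat with the same flags as the trace above.
    app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$app" -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    # make sure the app is killed even if the test aborts
    trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT
    # wait for the RPC socket to come up before issuing bdev/NBD commands
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done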
00:08:40.144 [2024-05-15 08:48:56.234193] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61577 ] 00:08:40.144 [2024-05-15 08:48:56.370437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:40.401 [2024-05-15 08:48:56.430981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.401 [2024-05-15 08:48:56.430990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.401 08:48:56 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:40.401 08:48:56 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:08:40.401 08:48:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.659 Malloc0 00:08:40.659 08:48:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.916 Malloc1 00:08:40.916 08:48:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:40.916 08:48:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:41.174 /dev/nbd0 00:08:41.174 08:48:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.174 08:48:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:41.174 08:48:57 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.174 1+0 records in 00:08:41.174 1+0 records out 00:08:41.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301624 s, 13.6 MB/s 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.174 08:48:57 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:08:41.175 08:48:57 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.175 08:48:57 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:41.175 08:48:57 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:08:41.175 08:48:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.175 08:48:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.175 08:48:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:41.434 /dev/nbd1 00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.434 1+0 records in 00:08:41.434 1+0 records out 00:08:41.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285271 s, 14.4 MB/s 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:41.434 08:48:57 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
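The waitfornbd checks traced above gate every nbd_start_disk: the helper polls /proc/partitions until the kernel exposes the device, then performs a single direct-I/O read through it to confirm the NBD connection actually moves data. A reduced sketch of that idea (retry pacing and cleanup simplified relative to the real helper in autotest_common.sh):

    # Sketch only: wait for an NBD device node and prove it can be read.
    waitfornbd_sketch() {
        local nbd_name=$1 tmp i
        tmp=$(mktemp)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct read; a non-empty result means the device really serves I/O
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$tmp")" -ne 0 ] || return 1
        rm -f "$tmp"
    }

    waitfornbd_sketch nbd0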
00:08:41.434 08:48:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.693 08:48:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.693 { 00:08:41.693 "bdev_name": "Malloc0", 00:08:41.693 "nbd_device": "/dev/nbd0" 00:08:41.693 }, 00:08:41.693 { 00:08:41.693 "bdev_name": "Malloc1", 00:08:41.693 "nbd_device": "/dev/nbd1" 00:08:41.693 } 00:08:41.693 ]' 00:08:41.693 08:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.693 { 00:08:41.693 "bdev_name": "Malloc0", 00:08:41.693 "nbd_device": "/dev/nbd0" 00:08:41.693 }, 00:08:41.693 { 00:08:41.693 "bdev_name": "Malloc1", 00:08:41.693 "nbd_device": "/dev/nbd1" 00:08:41.693 } 00:08:41.693 ]' 00:08:41.693 08:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:41.951 /dev/nbd1' 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:41.951 /dev/nbd1' 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:41.951 256+0 records in 00:08:41.951 256+0 records out 00:08:41.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886855 s, 118 MB/s 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:41.951 256+0 records in 00:08:41.951 256+0 records out 00:08:41.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258935 s, 40.5 MB/s 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.951 08:48:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:41.951 256+0 records in 00:08:41.951 256+0 records out 00:08:41.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274842 s, 38.2 MB/s 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.951 08:48:58 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.951 08:48:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.209 08:48:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.467 08:48:58 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.467 08:48:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.726 08:48:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:42.726 08:48:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:42.726 08:48:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:42.984 08:48:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:42.984 08:48:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:43.242 08:48:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:43.500 [2024-05-15 08:48:59.500292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.500 [2024-05-15 08:48:59.559250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.500 [2024-05-15 08:48:59.559264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.500 [2024-05-15 08:48:59.593107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:43.500 [2024-05-15 08:48:59.593173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:46.783 08:49:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:46.783 spdk_app_start Round 1 00:08:46.783 08:49:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:46.783 08:49:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61577 /var/tmp/spdk-nbd.sock 00:08:46.783 08:49:02 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61577 ']' 00:08:46.783 08:49:02 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.783 08:49:02 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:46.783 08:49:02 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
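Each round's data check above (nbd_dd_data_verify) amounts to writing one random megabyte to a scratch file and to every NBD device, then comparing the devices back against the file. A condensed sketch of that write/verify flow, with the device list and sizes taken from the trace and error handling trimmed:

    # Sketch only: write 1 MiB of random data through each NBD device and verify it.
    randfile=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$randfile" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        # push the pattern through the block device with direct I/O
        dd if="$randfile" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        # byte-for-byte comparison of the first 1 MiB read back from the device
        cmp -b -n 1M "$randfile" "$dev"
    done
    rm "$randfile"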
00:08:46.783 08:49:02 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:46.783 08:49:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:46.784 08:49:02 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.784 08:49:02 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:08:46.784 08:49:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:46.784 Malloc0 00:08:46.784 08:49:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.042 Malloc1 00:08:47.042 08:49:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.042 08:49:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:47.300 /dev/nbd0 00:08:47.559 08:49:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.559 08:49:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.559 1+0 records in 00:08:47.559 1+0 records out 
00:08:47.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290054 s, 14.1 MB/s 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.559 08:49:03 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:08:47.559 08:49:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.559 08:49:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.559 08:49:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:47.818 /dev/nbd1 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.818 1+0 records in 00:08:47.818 1+0 records out 00:08:47.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374431 s, 10.9 MB/s 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:47.818 08:49:03 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.818 08:49:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.076 { 00:08:48.076 "bdev_name": "Malloc0", 00:08:48.076 "nbd_device": "/dev/nbd0" 00:08:48.076 }, 00:08:48.076 { 00:08:48.076 "bdev_name": "Malloc1", 00:08:48.076 "nbd_device": "/dev/nbd1" 00:08:48.076 } 
00:08:48.076 ]' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.076 { 00:08:48.076 "bdev_name": "Malloc0", 00:08:48.076 "nbd_device": "/dev/nbd0" 00:08:48.076 }, 00:08:48.076 { 00:08:48.076 "bdev_name": "Malloc1", 00:08:48.076 "nbd_device": "/dev/nbd1" 00:08:48.076 } 00:08:48.076 ]' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.076 /dev/nbd1' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.076 /dev/nbd1' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:48.076 256+0 records in 00:08:48.076 256+0 records out 00:08:48.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00754825 s, 139 MB/s 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.076 256+0 records in 00:08:48.076 256+0 records out 00:08:48.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033667 s, 31.1 MB/s 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.076 256+0 records in 00:08:48.076 256+0 records out 00:08:48.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0324065 s, 32.4 MB/s 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.076 08:49:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.640 08:49:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.898 08:49:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.156 08:49:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.156 08:49:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:49.415 08:49:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:49.673 [2024-05-15 08:49:05.792989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.673 [2024-05-15 08:49:05.855434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.673 [2024-05-15 08:49:05.855438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.673 [2024-05-15 08:49:05.888665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:49.673 [2024-05-15 08:49:05.888739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:52.957 08:49:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:52.957 spdk_app_start Round 2 00:08:52.957 08:49:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:52.957 08:49:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61577 /var/tmp/spdk-nbd.sock 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61577 ']' 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:52.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
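The per-round teardown seen above stops each NBD device, waits for its node to leave /proc/partitions, and then asserts that the target reports no disks at all. A compact sketch using the same RPCs as the trace (retry limits simplified):

    # Sketch only: detach the NBD devices and confirm none remain registered.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for dev in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$dev"
        name=$(basename "$dev")
        # wait for the kernel device node to disappear
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    # count the /dev/nbd* entries still reported by the target; expect zero
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]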
00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:52.957 08:49:08 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:08:52.957 08:49:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.957 Malloc0 00:08:53.216 08:49:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:53.474 Malloc1 00:08:53.474 08:49:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:53.474 08:49:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:53.733 /dev/nbd0 00:08:53.733 08:49:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:53.733 08:49:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:53.733 1+0 records in 00:08:53.733 1+0 records out 
00:08:53.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260335 s, 15.7 MB/s 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:53.733 08:49:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:08:53.733 08:49:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.733 08:49:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:53.733 08:49:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:54.300 /dev/nbd1 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:54.300 1+0 records in 00:08:54.300 1+0 records out 00:08:54.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464634 s, 8.8 MB/s 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:54.300 08:49:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.300 08:49:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:54.558 { 00:08:54.558 "bdev_name": "Malloc0", 00:08:54.558 "nbd_device": "/dev/nbd0" 00:08:54.558 }, 00:08:54.558 { 00:08:54.558 "bdev_name": "Malloc1", 00:08:54.558 "nbd_device": "/dev/nbd1" 00:08:54.558 } 
00:08:54.558 ]' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:54.558 { 00:08:54.558 "bdev_name": "Malloc0", 00:08:54.558 "nbd_device": "/dev/nbd0" 00:08:54.558 }, 00:08:54.558 { 00:08:54.558 "bdev_name": "Malloc1", 00:08:54.558 "nbd_device": "/dev/nbd1" 00:08:54.558 } 00:08:54.558 ]' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:54.558 /dev/nbd1' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:54.558 /dev/nbd1' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:54.558 256+0 records in 00:08:54.558 256+0 records out 00:08:54.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00690443 s, 152 MB/s 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:54.558 256+0 records in 00:08:54.558 256+0 records out 00:08:54.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263577 s, 39.8 MB/s 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:54.558 256+0 records in 00:08:54.558 256+0 records out 00:08:54.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283486 s, 37.0 MB/s 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:54.558 08:49:10 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:54.558 08:49:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.559 08:49:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.559 08:49:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.559 08:49:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:54.559 08:49:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.559 08:49:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.123 08:49:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.380 08:49:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.638 08:49:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:55.638 08:49:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.638 08:49:11 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:55.638 08:49:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:55.638 08:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:55.638 08:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.895 08:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:55.895 08:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:55.895 08:49:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:55.895 08:49:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:55.895 08:49:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:55.895 08:49:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:55.895 08:49:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:56.153 08:49:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:56.153 [2024-05-15 08:49:12.339396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:56.411 [2024-05-15 08:49:12.400077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.411 [2024-05-15 08:49:12.400090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.411 [2024-05-15 08:49:12.431535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:56.411 [2024-05-15 08:49:12.431634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:59.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:59.691 08:49:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61577 /var/tmp/spdk-nbd.sock 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61577 ']' 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
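Tying the rounds together, event.sh drives the application through three iterations: each one ends with spdk_kill_instance over RPC and a short pause, and the next begins by waiting for the RPC socket again; the output a little further on ("spdk_app_start is called in Round N ... Shutdown signal received") is the application acknowledging each cycle. A bare-bones sketch of that outer loop, with the per-round bdev/NBD work reduced to a placeholder and waitforlisten standing in for the framework helper of the same name:

    # Sketch only: the app_repeat outer loop reflected in the event.sh trace.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... per-round Malloc bdev creation, NBD attach and data verify go here ...
        $rpc spdk_kill_instance SIGTERM    # stop the current app iteration; app_repeat restarts it
        sleep 3
    done
    # one final wait so the last restart is up before the harness kills the process
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock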
00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:08:59.691 08:49:15 event.app_repeat -- event/event.sh@39 -- # killprocess 61577 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 61577 ']' 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 61577 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61577 00:08:59.691 killing process with pid 61577 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61577' 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@965 -- # kill 61577 00:08:59.691 08:49:15 event.app_repeat -- common/autotest_common.sh@970 -- # wait 61577 00:08:59.691 spdk_app_start is called in Round 0. 00:08:59.691 Shutdown signal received, stop current app iteration 00:08:59.691 Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 reinitialization... 00:08:59.691 spdk_app_start is called in Round 1. 00:08:59.691 Shutdown signal received, stop current app iteration 00:08:59.691 Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 reinitialization... 00:08:59.691 spdk_app_start is called in Round 2. 00:08:59.691 Shutdown signal received, stop current app iteration 00:08:59.691 Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 reinitialization... 00:08:59.691 spdk_app_start is called in Round 3. 00:08:59.692 Shutdown signal received, stop current app iteration 00:08:59.692 08:49:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:59.692 08:49:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:59.692 00:08:59.692 real 0m19.630s 00:08:59.692 user 0m44.940s 00:08:59.692 sys 0m2.939s 00:08:59.692 08:49:15 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:59.692 ************************************ 00:08:59.692 END TEST app_repeat 00:08:59.692 ************************************ 00:08:59.692 08:49:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.692 08:49:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:59.692 08:49:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:59.692 08:49:15 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:59.692 08:49:15 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.692 08:49:15 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.692 ************************************ 00:08:59.692 START TEST cpu_locks 00:08:59.692 ************************************ 00:08:59.692 08:49:15 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:59.951 * Looking for test storage... 
00:08:59.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:59.951 08:49:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:59.951 08:49:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:59.951 08:49:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:59.951 08:49:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:59.951 08:49:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:59.951 08:49:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.951 08:49:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.951 ************************************ 00:08:59.951 START TEST default_locks 00:08:59.951 ************************************ 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62200 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62200 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62200 ']' 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.951 08:49:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.951 [2024-05-15 08:49:16.043825] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:08:59.951 [2024-05-15 08:49:16.043956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62200 ] 00:09:00.208 [2024-05-15 08:49:16.183629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.208 [2024-05-15 08:49:16.269710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.142 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:01.142 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:09:01.142 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62200 00:09:01.142 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62200 00:09:01.142 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:01.401 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62200 00:09:01.401 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 62200 ']' 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 62200 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62200 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:01.402 killing process with pid 62200 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62200' 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 62200 00:09:01.402 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 62200 00:09:01.659 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62200 00:09:01.659 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:09:01.659 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62200 00:09:01.659 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62200 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62200 ']' 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:01.660 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.660 ERROR: process (pid: 62200) is no longer running 00:09:01.660 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62200) - No such process 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:01.660 00:09:01.660 real 0m1.898s 00:09:01.660 user 0m2.201s 00:09:01.660 sys 0m0.500s 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:01.660 ************************************ 00:09:01.660 END TEST default_locks 00:09:01.660 ************************************ 00:09:01.660 08:49:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.660 08:49:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:01.660 08:49:17 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:01.660 08:49:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:01.660 08:49:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.918 ************************************ 00:09:01.918 START TEST default_locks_via_rpc 00:09:01.918 ************************************ 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62263 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62263 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62263 ']' 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:01.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
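The default_locks run above reduces to one check: start spdk_tgt on a single core, confirm the process holds a file lock whose path matches spdk_cpu_lock, kill it, and confirm no lock files remain. A minimal stand-alone sketch of that check, reconstructed from the lslocks/grep calls in the trace (the relative build path and the sleep-based wait are assumptions; the harness itself waits on the RPC socket via waitforlisten):

    #!/usr/bin/env bash
    # Sketch of the locks_exist helper traced above: succeeds when the given pid
    # holds at least one file lock whose path contains spdk_cpu_lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    ./build/bin/spdk_tgt -m 0x1 &        # assumed path inside an SPDK checkout
    pid=$!
    sleep 2                              # simplification; the trace uses waitforlisten
    locks_exist "$pid" && echo "pid $pid holds its core lock"
    kill "$pid"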
00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:01.918 08:49:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.918 [2024-05-15 08:49:17.953344] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:01.918 [2024-05-15 08:49:17.953455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62263 ] 00:09:01.918 [2024-05-15 08:49:18.086943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.177 [2024-05-15 08:49:18.168404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62263 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62263 00:09:02.744 08:49:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62263 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 62263 ']' 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 62263 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 62263 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62263' 00:09:03.311 killing process with pid 62263 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 62263 00:09:03.311 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 62263 00:09:03.570 00:09:03.570 real 0m1.861s 00:09:03.570 user 0m2.115s 00:09:03.570 sys 0m0.498s 00:09:03.570 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:03.570 08:49:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.570 ************************************ 00:09:03.570 END TEST default_locks_via_rpc 00:09:03.570 ************************************ 00:09:03.570 08:49:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:03.570 08:49:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:03.570 08:49:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:03.570 08:49:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:03.570 ************************************ 00:09:03.570 START TEST non_locking_app_on_locked_coremask 00:09:03.570 ************************************ 00:09:03.570 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62327 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62327 /var/tmp/spdk.sock 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62327 ']' 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:03.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:03.828 08:49:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.828 [2024-05-15 08:49:19.854914] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
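default_locks_via_rpc, traced above, toggles the same lock on a live target over JSON-RPC: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them. A hedged sketch of that sequence using the rpc.py client that appears earlier in this log (default socket path assumed; ordering reconstructed from the trace):

    # Release and re-acquire the core lock files on a running spdk_tgt.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used elsewhere in this log
    SOCK=/var/tmp/spdk.sock                           # default RPC socket (assumed)

    "$RPC" -s "$SOCK" framework_disable_cpumask_locks
    lslocks | grep spdk_cpu_lock || echo "no core locks held"

    "$RPC" -s "$SOCK" framework_enable_cpumask_locks
    lslocks | grep spdk_cpu_lock && echo "core lock re-acquired"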
00:09:03.828 [2024-05-15 08:49:19.855406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62327 ] 00:09:03.828 [2024-05-15 08:49:19.988768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.828 [2024-05-15 08:49:20.050248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62355 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62355 /var/tmp/spdk2.sock 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62355 ']' 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:04.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:04.761 08:49:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.761 [2024-05-15 08:49:20.942917] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:04.761 [2024-05-15 08:49:20.943062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62355 ] 00:09:05.018 [2024-05-15 08:49:21.094171] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
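non_locking_app_on_locked_coremask, starting in the trace above, runs two targets on the same core: the first claims the core 0 lock as usual, and the second only starts because it passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice) and listens on its own RPC socket. A condensed sketch of that pairing, with waiting and teardown omitted (the relative binary path is an assumption; the trace uses the full /home/vagrant/spdk_repo path):

    # First target claims the lock for core 0.
    ./build/bin/spdk_tgt -m 0x1 &
    # Second target shares core 0 only because it opts out of the core lock,
    # and it uses a separate RPC socket so both stay reachable.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &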
00:09:05.018 [2024-05-15 08:49:21.094261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.018 [2024-05-15 08:49:21.220186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.951 08:49:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:05.951 08:49:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:09:05.951 08:49:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62327 00:09:05.951 08:49:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:05.951 08:49:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62327 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62327 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62327 ']' 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62327 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62327 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:06.883 killing process with pid 62327 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62327' 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62327 00:09:06.883 08:49:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62327 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62355 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62355 ']' 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62355 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62355 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:07.448 killing process with pid 62355 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62355' 00:09:07.448 08:49:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62355 00:09:07.448 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62355 00:09:07.707 00:09:07.707 real 0m3.926s 00:09:07.707 user 0m4.709s 00:09:07.707 sys 0m0.935s 00:09:07.707 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.707 08:49:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:07.707 ************************************ 00:09:07.707 END TEST non_locking_app_on_locked_coremask 00:09:07.707 ************************************ 00:09:07.707 08:49:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:07.707 08:49:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:07.707 08:49:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.707 08:49:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:07.707 ************************************ 00:09:07.707 START TEST locking_app_on_unlocked_coremask 00:09:07.707 ************************************ 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62434 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62434 /var/tmp/spdk.sock 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62434 ']' 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:07.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:07.707 08:49:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:07.707 [2024-05-15 08:49:23.825490] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:07.707 [2024-05-15 08:49:23.825601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62434 ] 00:09:07.965 [2024-05-15 08:49:23.961165] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:07.965 [2024-05-15 08:49:23.961251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.965 [2024-05-15 08:49:24.054430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62449 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62449 /var/tmp/spdk2.sock 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62449 ']' 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:08.224 08:49:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.224 [2024-05-15 08:49:24.296433] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:08.224 [2024-05-15 08:49:24.296804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62449 ] 00:09:08.224 [2024-05-15 08:49:24.439941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.482 [2024-05-15 08:49:24.568454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.419 08:49:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:09.419 08:49:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:09:09.419 08:49:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62449 00:09:09.419 08:49:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62449 00:09:09.419 08:49:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62434 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62434 ']' 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 62434 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62434 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:09.986 killing process with pid 62434 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62434' 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 62434 00:09:09.986 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 62434 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62449 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62449 ']' 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 62449 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62449 00:09:10.920 killing process with pid 62449 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62449' 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 62449 00:09:10.920 08:49:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 62449 00:09:10.920 ************************************ 00:09:10.920 END TEST locking_app_on_unlocked_coremask 00:09:10.920 ************************************ 00:09:10.920 00:09:10.920 real 0m3.370s 00:09:10.920 user 0m3.985s 00:09:10.920 sys 0m0.912s 00:09:10.920 08:49:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.920 08:49:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.178 08:49:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:11.178 08:49:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:11.178 08:49:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.178 08:49:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.178 ************************************ 00:09:11.178 START TEST locking_app_on_locked_coremask 00:09:11.178 ************************************ 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62528 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:11.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62528 /var/tmp/spdk.sock 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62528 ']' 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:11.178 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.178 [2024-05-15 08:49:27.241924] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:11.178 [2024-05-15 08:49:27.242023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62528 ] 00:09:11.178 [2024-05-15 08:49:27.373280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.436 [2024-05-15 08:49:27.460851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62542 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62542 /var/tmp/spdk2.sock 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62542 /var/tmp/spdk2.sock 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:11.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62542 /var/tmp/spdk2.sock 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62542 ']' 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:11.436 08:49:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.695 [2024-05-15 08:49:27.724915] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:11.695 [2024-05-15 08:49:27.725057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:09:11.695 [2024-05-15 08:49:27.878148] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62528 has claimed it. 00:09:11.695 [2024-05-15 08:49:27.878268] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:12.628 ERROR: process (pid: 62542) is no longer running 00:09:12.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62542) - No such process 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62528 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62528 00:09:12.628 08:49:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62528 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62528 ']' 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62528 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62528 00:09:12.892 killing process with pid 62528 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62528' 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62528 00:09:12.892 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62528 00:09:13.150 ************************************ 00:09:13.150 END TEST locking_app_on_locked_coremask 00:09:13.150 ************************************ 00:09:13.150 00:09:13.150 real 0m2.196s 00:09:13.150 user 0m2.666s 00:09:13.150 sys 0m0.564s 00:09:13.150 08:49:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:13.150 08:49:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.408 08:49:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:13.408 08:49:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:13.408 08:49:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:13.408 08:49:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:13.408 ************************************ 00:09:13.408 START TEST locking_overlapped_coremask 00:09:13.408 ************************************ 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62594 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62594 /var/tmp/spdk.sock 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 62594 ']' 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:13.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:13.408 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.408 [2024-05-15 08:49:29.503147] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:13.409 [2024-05-15 08:49:29.503292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62594 ] 00:09:13.667 [2024-05-15 08:49:29.643505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.667 [2024-05-15 08:49:29.732863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.667 [2024-05-15 08:49:29.732965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.667 [2024-05-15 08:49:29.732988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62610 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62610 /var/tmp/spdk2.sock 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62610 /var/tmp/spdk2.sock 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62610 /var/tmp/spdk2.sock 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 62610 ']' 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:13.925 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.926 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:13.926 08:49:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.926 [2024-05-15 08:49:29.997282] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:13.926 [2024-05-15 08:49:29.997986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62610 ] 00:09:13.926 [2024-05-15 08:49:30.148365] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62594 has claimed it. 00:09:13.926 [2024-05-15 08:49:30.148453] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:14.860 ERROR: process (pid: 62610) is no longer running 00:09:14.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62610) - No such process 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62594 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 62594 ']' 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 62594 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62594 00:09:14.860 killing process with pid 62594 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62594' 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 62594 00:09:14.860 08:49:30 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 62594 00:09:15.118 00:09:15.118 real 0m1.770s 00:09:15.118 user 0m4.888s 00:09:15.118 sys 0m0.353s 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:15.118 ************************************ 00:09:15.118 END TEST locking_overlapped_coremask 00:09:15.118 ************************************ 00:09:15.118 08:49:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:15.118 08:49:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:15.118 08:49:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.118 08:49:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.118 ************************************ 00:09:15.118 START TEST locking_overlapped_coremask_via_rpc 00:09:15.118 ************************************ 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62662 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62662 /var/tmp/spdk.sock 00:09:15.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62662 ']' 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:15.118 08:49:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.118 [2024-05-15 08:49:31.291752] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:15.119 [2024-05-15 08:49:31.292073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62662 ] 00:09:15.377 [2024-05-15 08:49:31.424385] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
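The check_remaining_locks step traced in the overlapped-coremask test above verifies that a target started with -m 0x7 leaves exactly one lock file per claimed core in /var/tmp. A small sketch of that comparison as reconstructed from the trace (three files are expected because mask 0x7 covers cores 0-2):

    # Expect exactly /var/tmp/spdk_cpu_lock_000 .. _002 and nothing else.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
        echo "only the expected core locks remain"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi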
00:09:15.377 [2024-05-15 08:49:31.424844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.377 [2024-05-15 08:49:31.513594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.377 [2024-05-15 08:49:31.513709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.377 [2024-05-15 08:49:31.513725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62692 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62692 /var/tmp/spdk2.sock 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62692 ']' 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:16.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:16.311 08:49:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.311 [2024-05-15 08:49:32.358274] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:16.311 [2024-05-15 08:49:32.358921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62692 ] 00:09:16.568 [2024-05-15 08:49:32.553638] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:16.568 [2024-05-15 08:49:32.553728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:16.568 [2024-05-15 08:49:32.728989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.568 [2024-05-15 08:49:32.729048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.568 [2024-05-15 08:49:32.729051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.511 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:17.511 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:17.511 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:17.511 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.511 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.511 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.512 [2024-05-15 08:49:33.603895] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62662 has claimed it. 
00:09:17.512 2024/05/15 08:49:33 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:09:17.512 request: 00:09:17.512 { 00:09:17.512 "method": "framework_enable_cpumask_locks", 00:09:17.512 "params": {} 00:09:17.512 } 00:09:17.512 Got JSON-RPC error response 00:09:17.512 GoRPCClient: error on JSON-RPC call 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62662 /var/tmp/spdk.sock 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62662 ']' 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:17.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:17.512 08:49:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62692 /var/tmp/spdk2.sock 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62692 ']' 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:18.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
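What the negative branch above exercises, reduced to a stand-alone sketch: the first spdk_tgt (pid 62662) already holds the per-core lock files /var/tmp/spdk_cpu_lock_000..002 for cores 0-2, so a second target whose mask overlaps on core 2 only boots because locking is disabled, and asking it to re-enable the locks over JSON-RPC has to fail. The commands below are illustrative and use paths relative to the SPDK repo; this run drove the RPC through the Go client, but scripts/rpc.py issues the same framework_enable_cpumask_locks call:

  # Second target overlaps on core 2 (mask 0x1c = cores 2-4) but skips the lock files:
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # Re-enabling locking over its socket must fail, since core 2 is already claimed:
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> Code=-32603, "Failed to claim CPU core: 2", exactly as logged above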
00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:18.100 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:18.358 00:09:18.358 real 0m3.255s 00:09:18.358 user 0m1.930s 00:09:18.358 sys 0m0.250s 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.358 08:49:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.358 ************************************ 00:09:18.358 END TEST locking_overlapped_coremask_via_rpc 00:09:18.358 ************************************ 00:09:18.358 08:49:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:18.358 08:49:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62662 ]] 00:09:18.358 08:49:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62662 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62662 ']' 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62662 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62662 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:18.358 killing process with pid 62662 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62662' 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 62662 00:09:18.358 08:49:34 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 62662 00:09:18.924 08:49:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62692 ]] 00:09:18.924 08:49:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62692 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62692 ']' 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62692 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:18.924 
08:49:34 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62692 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:09:18.924 killing process with pid 62692 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62692' 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 62692 00:09:18.924 08:49:34 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 62692 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62662 ]] 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62662 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62662 ']' 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62662 00:09:19.182 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (62662) - No such process 00:09:19.182 Process with pid 62662 is not found 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 62662 is not found' 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62692 ]] 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62692 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62692 ']' 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62692 00:09:19.182 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (62692) - No such process 00:09:19.182 Process with pid 62692 is not found 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 62692 is not found' 00:09:19.182 08:49:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:19.182 ************************************ 00:09:19.182 END TEST cpu_locks 00:09:19.182 ************************************ 00:09:19.182 00:09:19.182 real 0m19.361s 00:09:19.182 user 0m37.495s 00:09:19.182 sys 0m4.635s 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.182 08:49:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.182 00:09:19.182 real 0m47.859s 00:09:19.182 user 1m37.843s 00:09:19.182 sys 0m8.246s 00:09:19.182 08:49:35 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.182 08:49:35 event -- common/autotest_common.sh@10 -- # set +x 00:09:19.182 ************************************ 00:09:19.182 END TEST event 00:09:19.182 ************************************ 00:09:19.182 08:49:35 -- spdk/autotest.sh@191 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:19.182 08:49:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.182 08:49:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.182 08:49:35 -- common/autotest_common.sh@10 -- # set +x 00:09:19.182 ************************************ 00:09:19.182 START TEST thread 00:09:19.182 ************************************ 00:09:19.182 08:49:35 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:19.182 * Looking for test storage... 
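A note on the cleanup trace above: kill -0 <pid> only probes whether a process still exists, so with both targets already gone the probe fails ("No such process"), the "Process with pid ... is not found" message is echoed, and cleanup simply removes the per-core lock files. A minimal stand-alone sketch of that pattern (the pid value is illustrative):

  pid=62662
  if kill -0 "$pid" 2>/dev/null; then
      kill "$pid"                                   # still alive: terminate it
  else
      echo "Process with pid $pid is not found"     # already gone, nothing to kill
  fi
  rm -f /var/tmp/spdk_cpu_lock_*                    # drop any stale core locks either way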
00:09:19.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:19.182 08:49:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:19.182 08:49:35 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:09:19.182 08:49:35 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.182 08:49:35 thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.182 ************************************ 00:09:19.182 START TEST thread_poller_perf 00:09:19.182 ************************************ 00:09:19.182 08:49:35 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:19.182 [2024-05-15 08:49:35.399074] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:19.182 [2024-05-15 08:49:35.399157] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62844 ] 00:09:19.440 [2024-05-15 08:49:35.531361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.440 [2024-05-15 08:49:35.593695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.440 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:20.812 ====================================== 00:09:20.812 busy:2215804609 (cyc) 00:09:20.812 total_run_count: 262000 00:09:20.812 tsc_hz: 2200000000 (cyc) 00:09:20.812 ====================================== 00:09:20.812 poller_cost: 8457 (cyc), 3844 (nsec) 00:09:20.812 00:09:20.812 real 0m1.325s 00:09:20.812 user 0m1.176s 00:09:20.812 sys 0m0.036s 00:09:20.812 08:49:36 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:20.812 08:49:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:20.812 ************************************ 00:09:20.812 END TEST thread_poller_perf 00:09:20.812 ************************************ 00:09:20.812 08:49:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:20.812 08:49:36 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:09:20.812 08:49:36 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:20.812 08:49:36 thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.812 ************************************ 00:09:20.812 START TEST thread_poller_perf 00:09:20.812 ************************************ 00:09:20.812 08:49:36 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:20.812 [2024-05-15 08:49:36.771533] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
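A quick sanity check of the result block above: the reported poller_cost matches the busy cycle count divided by total_run_count, converted to nanoseconds with the reported TSC frequency. Using the figures from this first run (1000 pollers, 1 µs period, 1 s):

  busy=2215804609 runs=262000 tsc_hz=2200000000
  echo "cycles per poll: $(( busy / runs ))"                         # 8457 (cyc)
  echo "ns per poll:     $(( busy / runs * 1000000000 / tsc_hz ))"   # 3844 (nsec)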
00:09:20.812 [2024-05-15 08:49:36.771729] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62874 ] 00:09:20.812 [2024-05-15 08:49:36.909525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.812 [2024-05-15 08:49:36.975947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.812 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:22.186 ====================================== 00:09:22.186 busy:2202391173 (cyc) 00:09:22.186 total_run_count: 3973000 00:09:22.186 tsc_hz: 2200000000 (cyc) 00:09:22.186 ====================================== 00:09:22.186 poller_cost: 554 (cyc), 251 (nsec) 00:09:22.186 00:09:22.186 real 0m1.328s 00:09:22.186 user 0m1.170s 00:09:22.186 sys 0m0.048s 00:09:22.186 08:49:38 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.186 08:49:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:22.186 ************************************ 00:09:22.186 END TEST thread_poller_perf 00:09:22.186 ************************************ 00:09:22.186 08:49:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:22.186 ************************************ 00:09:22.186 END TEST thread 00:09:22.186 ************************************ 00:09:22.186 00:09:22.186 real 0m2.806s 00:09:22.186 user 0m2.398s 00:09:22.186 sys 0m0.178s 00:09:22.186 08:49:38 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.186 08:49:38 thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.186 08:49:38 -- spdk/autotest.sh@192 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:22.186 08:49:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.186 08:49:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.186 08:49:38 -- common/autotest_common.sh@10 -- # set +x 00:09:22.186 ************************************ 00:09:22.186 START TEST accel 00:09:22.186 ************************************ 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:22.186 * Looking for test storage... 00:09:22.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:22.186 08:49:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:22.186 08:49:38 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:22.186 08:49:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:22.186 08:49:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62948 00:09:22.186 08:49:38 accel -- accel/accel.sh@63 -- # waitforlisten 62948 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@827 -- # '[' -z 62948 ']' 00:09:22.186 08:49:38 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.186 08:49:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:22.186 08:49:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
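The same arithmetic covers the zero-period run just above: 2202391173 busy cycles over 3,973,000 poller invocations gives the reported 554 cycles, or 251 ns at the 2.2 GHz TSC. With a 0 µs period the pollers run on every reactor pass rather than on a timer, so the one-second window fits roughly fifteen times as many invocations as the 1 µs-period run and the per-call cost drops accordingly.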
00:09:22.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.186 08:49:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:22.186 08:49:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.186 08:49:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.186 08:49:38 accel -- common/autotest_common.sh@10 -- # set +x 00:09:22.186 08:49:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:22.186 08:49:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:22.186 08:49:38 accel -- accel/accel.sh@41 -- # jq -r . 00:09:22.186 [2024-05-15 08:49:38.310604] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:22.187 [2024-05-15 08:49:38.310743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62948 ] 00:09:22.445 [2024-05-15 08:49:38.452258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.445 [2024-05-15 08:49:38.538260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@860 -- # return 0 00:09:23.381 08:49:39 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:23.381 08:49:39 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:23.381 08:49:39 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:23.381 08:49:39 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:23.381 08:49:39 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:23.381 08:49:39 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.381 08:49:39 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 
08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # IFS== 00:09:23.381 08:49:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:23.381 08:49:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:23.381 08:49:39 accel -- accel/accel.sh@75 -- # killprocess 62948 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@946 -- # '[' -z 62948 ']' 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@950 -- # kill -0 62948 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@951 -- # uname 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62948 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:23.381 killing process with pid 62948 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62948' 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@965 -- # kill 62948 00:09:23.381 08:49:39 accel -- common/autotest_common.sh@970 -- # wait 62948 00:09:23.640 08:49:39 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:23.640 08:49:39 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:23.640 08:49:39 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:23.640 08:49:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.640 08:49:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:23.640 08:49:39 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:23.640 08:49:39 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
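The long opcode loop traced above is the harness asking the target how each accel opcode is assigned and recording that everything lands on the software module (no hardware accel engine is configured in this run). Collapsed to a stand-alone query, with the socket assumed to be the default one used here:

  scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints one "opcode=module" pair per line, e.g. "copy=software"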
00:09:23.640 08:49:39 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:23.640 08:49:39 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:23.640 08:49:39 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:23.640 08:49:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:23.640 08:49:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.640 08:49:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:23.640 ************************************ 00:09:23.640 START TEST accel_missing_filename 00:09:23.640 ************************************ 00:09:23.640 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:09:23.640 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:23.640 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:23.641 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:23.641 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.641 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:23.641 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.641 08:49:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:23.641 08:49:39 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:23.641 [2024-05-15 08:49:39.862533] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:23.641 [2024-05-15 08:49:39.862686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63019 ] 00:09:23.899 [2024-05-15 08:49:40.001894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.899 [2024-05-15 08:49:40.089924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.899 [2024-05-15 08:49:40.126678] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.158 [2024-05-15 08:49:40.173500] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:09:24.158 A filename is required. 
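accel_missing_filename drives accel_perf with -w compress but no -l input file, so the app aborts with "A filename is required." and the NOT wrapper records that expected failure as a pass. The exit-status lines that follow (es=234, then es=106, then es=1) are the helper folding the raw status down: 234 is presumably a negative error code wrapped into an 8-bit exit status, 106 = 234 - 128, and anything non-zero finally collapses to 1. A minimal stand-alone sketch of a NOT-style helper (the real one lives in test/common/autotest_common.sh and also performs the status normalization just described):

  NOT() {
      if "$@"; then
          return 1     # command unexpectedly succeeded: the negative test fails
      fi
      return 0         # command failed, which is what the test wants
  }
  NOT build/examples/accel_perf -t 1 -w compress   # passes only because accel_perf exits non-zero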
00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:24.158 00:09:24.158 real 0m0.470s 00:09:24.158 user 0m0.332s 00:09:24.158 sys 0m0.089s 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.158 08:49:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:24.158 ************************************ 00:09:24.158 END TEST accel_missing_filename 00:09:24.158 ************************************ 00:09:24.158 08:49:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:24.158 08:49:40 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:24.158 08:49:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.158 08:49:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.158 ************************************ 00:09:24.158 START TEST accel_compress_verify 00:09:24.158 ************************************ 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.158 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.158 08:49:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:24.158 08:49:40 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:09:24.158 [2024-05-15 08:49:40.369400] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:24.158 [2024-05-15 08:49:40.369525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63049 ] 00:09:24.416 [2024-05-15 08:49:40.508109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.416 [2024-05-15 08:49:40.596675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.416 [2024-05-15 08:49:40.634706] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.676 [2024-05-15 08:49:40.682849] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:09:24.676 00:09:24.676 Compression does not support the verify option, aborting. 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:24.676 00:09:24.676 real 0m0.474s 00:09:24.676 user 0m0.335s 00:09:24.676 sys 0m0.089s 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.676 08:49:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 ************************************ 00:09:24.676 END TEST accel_compress_verify 00:09:24.676 ************************************ 00:09:24.676 08:49:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:24.676 08:49:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:24.676 08:49:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.676 08:49:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 ************************************ 00:09:24.676 START TEST accel_wrong_workload 00:09:24.676 ************************************ 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:24.676 08:49:40 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:24.676 08:49:40 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:09:24.676 Unsupported workload type: foobar 00:09:24.676 [2024-05-15 08:49:40.884038] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:24.676 accel_perf options: 00:09:24.676 [-h help message] 00:09:24.676 [-q queue depth per core] 00:09:24.676 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:24.676 [-T number of threads per core 00:09:24.676 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:24.676 [-t time in seconds] 00:09:24.676 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:24.676 [ dif_verify, , dif_generate, dif_generate_copy 00:09:24.676 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:24.676 [-l for compress/decompress workloads, name of uncompressed input file 00:09:24.676 [-S for crc32c workload, use this seed value (default 0) 00:09:24.676 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:24.676 [-f for fill workload, use this BYTE value (default 255) 00:09:24.676 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:24.676 [-y verify result if this switch is on] 00:09:24.676 [-a tasks to allocate per core (default: same value as -q)] 00:09:24.676 Can be used to spread operations across a wider range of memory. 
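The usage dump above is the expected outcome for accel_wrong_workload: -w foobar is rejected during argument parsing (app.c:1451), accel_perf prints its option summary, and the NOT wrapper again counts the failure as a pass. For contrast, a well-formed invocation picks one of the listed workload types, exactly as the crc32c case further down does:

  build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # 1 s of crc32c, seed 32, with result verification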
00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:24.676 00:09:24.676 real 0m0.033s 00:09:24.676 user 0m0.022s 00:09:24.676 sys 0m0.011s 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.676 08:49:40 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 ************************************ 00:09:24.676 END TEST accel_wrong_workload 00:09:24.676 ************************************ 00:09:24.936 08:49:40 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:24.936 08:49:40 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:24.936 08:49:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.936 08:49:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.936 ************************************ 00:09:24.936 START TEST accel_negative_buffers 00:09:24.936 ************************************ 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:24.936 08:49:40 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:24.936 -x option must be non-negative. 
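accel_negative_buffers is the same pattern with a bad buffer count: per the usage text, the xor workload needs at least two source buffers, so -x -1 is rejected at parse time ("-x option must be non-negative.") and the option summary is printed once more below. Dropping the flag, or passing a legal value, would make the command well-formed, e.g.:

  build/examples/accel_perf -t 1 -w xor -y        # uses the documented default of 2 source buffers
  build/examples/accel_perf -t 1 -w xor -y -x 3   # or an explicit, non-negative count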
00:09:24.936 [2024-05-15 08:49:40.955282] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:24.936 accel_perf options: 00:09:24.936 [-h help message] 00:09:24.936 [-q queue depth per core] 00:09:24.936 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:24.936 [-T number of threads per core 00:09:24.936 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:24.936 [-t time in seconds] 00:09:24.936 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:24.936 [ dif_verify, , dif_generate, dif_generate_copy 00:09:24.936 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:24.936 [-l for compress/decompress workloads, name of uncompressed input file 00:09:24.936 [-S for crc32c workload, use this seed value (default 0) 00:09:24.936 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:24.936 [-f for fill workload, use this BYTE value (default 255) 00:09:24.936 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:24.936 [-y verify result if this switch is on] 00:09:24.936 [-a tasks to allocate per core (default: same value as -q)] 00:09:24.936 Can be used to spread operations across a wider range of memory. 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:24.936 00:09:24.936 real 0m0.027s 00:09:24.936 user 0m0.016s 00:09:24.936 sys 0m0.011s 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.936 08:49:40 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:24.936 ************************************ 00:09:24.936 END TEST accel_negative_buffers 00:09:24.936 ************************************ 00:09:24.936 08:49:40 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:24.936 08:49:40 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:24.936 08:49:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.936 08:49:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.936 ************************************ 00:09:24.936 START TEST accel_crc32c 00:09:24.936 ************************************ 00:09:24.936 08:49:41 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:24.936 08:49:41 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:24.936 [2024-05-15 08:49:41.025711] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:24.936 [2024-05-15 08:49:41.025840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63102 ] 00:09:24.936 [2024-05-15 08:49:41.168091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.196 [2024-05-15 08:49:41.235439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
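The run of val= assignments above and below appears to be the accel harness stepping through accel_perf's configuration printout field by field (the crc32c workload, seed 32, a 4096-byte transfer, the software module, a 1-second run, verification on) before it finally checks which module executed the opcode. The heavily escaped comparison at the end of this test is just bash xtrace displaying a literal string match; a readable equivalent of that final assertion would be:

  [[ -n "$accel_module" ]] && [[ "$accel_module" == "software" ]]   # the op must have run on the software module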
00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:25.196 08:49:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:26.590 08:49:42 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:26.590 08:49:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:26.590 00:09:26.590 real 0m1.420s 00:09:26.590 user 0m1.247s 00:09:26.590 sys 0m0.076s 00:09:26.590 08:49:42 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:26.590 08:49:42 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:26.590 ************************************ 00:09:26.590 END TEST accel_crc32c 00:09:26.590 ************************************ 00:09:26.590 08:49:42 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:26.590 08:49:42 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:26.590 08:49:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:26.590 08:49:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:26.590 ************************************ 00:09:26.590 START TEST accel_crc32c_C2 00:09:26.590 ************************************ 00:09:26.590 08:49:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:26.591 08:49:42 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:26.591 [2024-05-15 08:49:42.488426] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:26.591 [2024-05-15 08:49:42.488551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63142 ] 00:09:26.591 [2024-05-15 08:49:42.626434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.591 [2024-05-15 08:49:42.690833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:26.591 08:49:42 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.591 08:49:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:27.965 00:09:27.965 real 0m1.405s 00:09:27.965 user 0m1.229s 00:09:27.965 sys 0m0.080s 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:27.965 08:49:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:27.965 ************************************ 00:09:27.965 END TEST accel_crc32c_C2 00:09:27.965 ************************************ 00:09:27.965 08:49:43 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:27.965 08:49:43 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:27.965 08:49:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:27.965 08:49:43 accel -- common/autotest_common.sh@10 -- # set +x 00:09:27.965 ************************************ 00:09:27.965 START TEST accel_copy 00:09:27.965 ************************************ 00:09:27.965 08:49:43 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:27.965 
08:49:43 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:27.965 08:49:43 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:27.965 [2024-05-15 08:49:43.931034] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:27.965 [2024-05-15 08:49:43.931129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ] 00:09:27.965 [2024-05-15 08:49:44.059066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.965 [2024-05-15 08:49:44.124129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.965 08:49:44 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.965 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:27.966 08:49:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:29.343 ************************************ 00:09:29.343 END TEST accel_copy 00:09:29.343 ************************************ 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:29.343 08:49:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:29.343 00:09:29.343 real 0m1.398s 00:09:29.343 user 0m1.228s 00:09:29.343 sys 0m0.073s 00:09:29.343 08:49:45 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:29.343 08:49:45 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:29.343 08:49:45 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:29.343 08:49:45 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:29.343 08:49:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:29.343 08:49:45 accel -- common/autotest_common.sh@10 -- # set +x 00:09:29.343 ************************************ 00:09:29.343 START TEST accel_fill 00:09:29.343 ************************************ 00:09:29.343 08:49:45 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:29.343 08:49:45 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:29.343 [2024-05-15 08:49:45.373072] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:29.343 [2024-05-15 08:49:45.373183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63200 ] 00:09:29.343 [2024-05-15 08:49:45.513602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.343 [2024-05-15 08:49:45.574847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:29.603 08:49:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:30.538 08:49:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:30.538 00:09:30.538 real 0m1.411s 00:09:30.538 user 0m1.231s 00:09:30.538 sys 0m0.081s 00:09:30.538 08:49:46 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:30.538 08:49:46 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 ************************************ 00:09:30.538 END TEST accel_fill 00:09:30.538 ************************************ 00:09:30.797 08:49:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:30.797 08:49:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:30.797 08:49:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:30.797 08:49:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:30.797 ************************************ 00:09:30.797 START TEST accel_copy_crc32c 00:09:30.797 ************************************ 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:30.797 08:49:46 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:30.797 [2024-05-15 08:49:46.827171] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:30.797 [2024-05-15 08:49:46.827283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63240 ] 00:09:30.797 [2024-05-15 08:49:46.962891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.797 [2024-05-15 08:49:47.025065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.055 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.056 08:49:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:32.431 00:09:32.431 real 0m1.452s 00:09:32.431 user 0m1.261s 00:09:32.431 sys 0m0.091s 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.431 08:49:48 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:32.431 ************************************ 00:09:32.431 END TEST accel_copy_crc32c 00:09:32.431 ************************************ 00:09:32.431 08:49:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:32.431 08:49:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:32.432 08:49:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.432 08:49:48 accel -- common/autotest_common.sh@10 -- # set +x 00:09:32.432 ************************************ 00:09:32.432 START TEST accel_copy_crc32c_C2 00:09:32.432 ************************************ 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:32.432 [2024-05-15 08:49:48.332451] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:32.432 [2024-05-15 08:49:48.332658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:09:32.432 [2024-05-15 08:49:48.471795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.432 [2024-05-15 08:49:48.561119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.432 08:49:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:33.805 00:09:33.805 real 0m1.466s 00:09:33.805 user 0m1.271s 00:09:33.805 sys 0m0.090s 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.805 08:49:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:33.805 ************************************ 00:09:33.805 END TEST accel_copy_crc32c_C2 00:09:33.805 ************************************ 00:09:33.805 08:49:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:09:33.805 08:49:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:33.805 08:49:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.805 08:49:49 accel -- common/autotest_common.sh@10 -- # set +x 00:09:33.805 ************************************ 00:09:33.805 START TEST accel_dualcast 00:09:33.805 ************************************ 00:09:33.805 08:49:49 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:33.805 08:49:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:33.805 [2024-05-15 08:49:49.838037] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:33.805 [2024-05-15 08:49:49.838169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:09:33.805 [2024-05-15 08:49:49.980724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.064 [2024-05-15 08:49:50.073278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.064 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.065 08:49:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 
08:49:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:35.438 08:49:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:35.438 00:09:35.438 real 0m1.459s 00:09:35.438 user 0m1.258s 00:09:35.438 sys 0m0.099s 00:09:35.438 08:49:51 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.438 ************************************ 00:09:35.438 END TEST accel_dualcast 00:09:35.438 ************************************ 00:09:35.438 08:49:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:35.438 08:49:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:35.438 08:49:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:35.438 08:49:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.438 08:49:51 accel -- common/autotest_common.sh@10 -- # set +x 00:09:35.438 ************************************ 00:09:35.438 START TEST accel_compare 00:09:35.438 ************************************ 00:09:35.438 08:49:51 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:35.438 [2024-05-15 08:49:51.342718] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:35.438 [2024-05-15 08:49:51.342873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63338 ] 00:09:35.438 [2024-05-15 08:49:51.481337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.438 [2024-05-15 08:49:51.571159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.438 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:35.439 08:49:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:36.811 08:49:52 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:36.811 08:49:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:36.811 00:09:36.811 real 0m1.478s 00:09:36.811 user 0m1.276s 00:09:36.811 sys 0m0.099s 00:09:36.811 08:49:52 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.811 ************************************ 00:09:36.811 END TEST accel_compare 00:09:36.811 08:49:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 ************************************ 00:09:36.811 08:49:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:36.811 08:49:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:36.811 08:49:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.811 08:49:52 accel -- common/autotest_common.sh@10 -- # set +x 00:09:36.811 ************************************ 00:09:36.811 START TEST accel_xor 00:09:36.811 ************************************ 00:09:36.811 08:49:52 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:36.811 08:49:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:36.811 [2024-05-15 08:49:52.862152] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:36.811 [2024-05-15 08:49:52.862253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63379 ] 00:09:36.811 [2024-05-15 08:49:52.999895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.072 [2024-05-15 08:49:53.088049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:37.073 08:49:53 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.073 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:37.074 08:49:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:38.449 00:09:38.449 real 0m1.453s 00:09:38.449 user 0m1.264s 00:09:38.449 sys 0m0.090s 00:09:38.449 08:49:54 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.449 ************************************ 00:09:38.449 END TEST accel_xor 00:09:38.449 ************************************ 00:09:38.449 08:49:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 08:49:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:38.449 08:49:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:38.449 08:49:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.449 08:49:54 accel -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 ************************************ 00:09:38.449 START TEST accel_xor 00:09:38.449 ************************************ 00:09:38.449 08:49:54 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:38.449 [2024-05-15 08:49:54.356092] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:38.449 [2024-05-15 08:49:54.356228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63409 ] 00:09:38.449 [2024-05-15 08:49:54.495111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.449 [2024-05-15 08:49:54.582980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:38.449 08:49:54 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:38.449 08:49:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:39.819 08:49:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:39.819 00:09:39.819 real 0m1.449s 00:09:39.819 user 0m1.262s 00:09:39.819 sys 0m0.088s 00:09:39.819 08:49:55 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.819 08:49:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:39.819 ************************************ 00:09:39.819 END TEST accel_xor 00:09:39.819 ************************************ 00:09:39.819 08:49:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:39.819 08:49:55 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:39.819 08:49:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.819 08:49:55 accel -- common/autotest_common.sh@10 -- # set +x 00:09:39.819 ************************************ 00:09:39.819 START TEST accel_dif_verify 00:09:39.819 ************************************ 00:09:39.819 08:49:55 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:39.819 08:49:55 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:39.819 [2024-05-15 08:49:55.853094] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:39.819 [2024-05-15 08:49:55.853230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ] 00:09:39.819 [2024-05-15 08:49:55.991802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.077 [2024-05-15 08:49:56.079768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:40.077 08:49:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:41.451 08:49:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:41.451 00:09:41.451 real 0m1.461s 00:09:41.451 user 0m1.269s 00:09:41.451 sys 0m0.092s 00:09:41.451 08:49:57 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.451 08:49:57 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:41.451 ************************************ 00:09:41.451 END TEST accel_dif_verify 00:09:41.451 ************************************ 00:09:41.451 08:49:57 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:41.451 08:49:57 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:41.451 08:49:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.451 08:49:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:41.451 ************************************ 00:09:41.451 START TEST accel_dif_generate 00:09:41.451 ************************************ 00:09:41.451 08:49:57 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:09:41.451 08:49:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:41.451 08:49:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:41.451 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:41.452 08:49:57 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:41.452 [2024-05-15 08:49:57.356185] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:41.452 [2024-05-15 08:49:57.356316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63478 ] 00:09:41.452 [2024-05-15 08:49:57.499062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.452 [2024-05-15 08:49:57.583085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 
08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.452 08:49:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:42.826 08:49:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:42.826 00:09:42.826 real 0m1.473s 00:09:42.826 user 0m1.280s 00:09:42.826 sys 0m0.089s 00:09:42.826 08:49:58 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:42.826 
08:49:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:42.826 ************************************ 00:09:42.826 END TEST accel_dif_generate 00:09:42.826 ************************************ 00:09:42.826 08:49:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:42.826 08:49:58 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:42.826 08:49:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:42.826 08:49:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:42.826 ************************************ 00:09:42.826 START TEST accel_dif_generate_copy 00:09:42.826 ************************************ 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.826 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:42.827 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:42.827 08:49:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:42.827 [2024-05-15 08:49:58.874398] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:42.827 [2024-05-15 08:49:58.874525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63513 ] 00:09:42.827 [2024-05-15 08:49:59.011941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.086 [2024-05-15 08:49:59.093147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:43.086 08:49:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.463 00:09:44.463 real 0m1.423s 00:09:44.463 user 0m1.245s 00:09:44.463 sys 0m0.081s 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:44.463 ************************************ 00:09:44.463 END TEST accel_dif_generate_copy 00:09:44.463 ************************************ 00:09:44.463 08:50:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:44.463 08:50:00 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:44.463 08:50:00 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:44.463 08:50:00 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:09:44.463 08:50:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:44.463 08:50:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.463 ************************************ 00:09:44.463 START TEST accel_comp 00:09:44.463 ************************************ 00:09:44.463 08:50:00 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
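Every accel_* case in this stretch drives the same accel_perf example binary with a different -w workload: the harness builds its accel JSON config, hands it over /dev/fd/62 with -c, and lets each workload run for one second (-t 1). A rough manual equivalent for the DIF workloads that just finished, assuming the vagrant checkout layout shown in the trace (-c can be left out when the default software accel module is all that is needed):

    cd /home/vagrant/spdk_repo/spdk
    # 1-second run of each DIF workload on the software accel module
    ./build/examples/accel_perf -t 1 -w dif_generate
    ./build/examples/accel_perf -t 1 -w dif_generate_copy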
00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:44.463 [2024-05-15 08:50:00.342386] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:44.463 [2024-05-15 08:50:00.342542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63547 ] 00:09:44.463 [2024-05-15 08:50:00.480663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.463 [2024-05-15 08:50:00.560304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.463 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:44.464 08:50:00 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:44.464 08:50:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:09:45.881 08:50:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:45.881 00:09:45.881 real 0m1.441s 00:09:45.881 user 0m1.254s 00:09:45.881 sys 0m0.087s 00:09:45.881 08:50:01 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:45.881 ************************************ 00:09:45.881 END TEST accel_comp 00:09:45.881 ************************************ 00:09:45.881 08:50:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:09:45.881 08:50:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:45.881 08:50:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:45.882 08:50:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:45.882 08:50:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:45.882 ************************************ 00:09:45.882 START TEST accel_decomp 00:09:45.882 ************************************ 00:09:45.882 08:50:01 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:09:45.882 
08:50:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:09:45.882 08:50:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:09:45.882 [2024-05-15 08:50:01.826969] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:45.882 [2024-05-15 08:50:01.827119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63582 ] 00:09:45.882 [2024-05-15 08:50:01.967217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.882 [2024-05-15 08:50:02.051996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:45.882 08:50:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:47.260 08:50:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:47.260 00:09:47.260 real 0m1.440s 00:09:47.260 user 0m0.019s 00:09:47.260 sys 0m0.001s 00:09:47.260 08:50:03 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:47.260 08:50:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:09:47.260 ************************************ 00:09:47.260 END TEST accel_decomp 00:09:47.260 ************************************ 00:09:47.260 08:50:03 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:47.260 08:50:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:09:47.260 08:50:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:47.260 08:50:03 accel -- common/autotest_common.sh@10 -- # set +x 
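The compress and decompress cases additionally point accel_perf at the bib corpus under test/accel/ with -l, and the decompress runs add -y (result verification); the *_full variants that follow also pass -o 0, after which the trace reports a '111250 bytes' transfer instead of the default 4096. A minimal sketch of the single-core runs, assuming the same checkout as above:

    cd /home/vagrant/spdk_repo/spdk
    # compress the bib test file, then decompress it and verify the output
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y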
00:09:47.260 ************************************ 00:09:47.260 START TEST accel_decmop_full 00:09:47.260 ************************************ 00:09:47.260 08:50:03 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:09:47.260 08:50:03 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:09:47.260 [2024-05-15 08:50:03.309167] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:09:47.260 [2024-05-15 08:50:03.309299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63611 ] 00:09:47.260 [2024-05-15 08:50:03.446475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.520 [2024-05-15 08:50:03.512199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.520 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:47.521 08:50:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:48.898 08:50:04 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:48.898 00:09:48.898 real 0m1.472s 00:09:48.898 user 0m1.283s 00:09:48.898 sys 0m0.089s 00:09:48.898 08:50:04 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:48.898 08:50:04 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:09:48.898 ************************************ 00:09:48.898 END TEST accel_decmop_full 00:09:48.898 ************************************ 00:09:48.898 08:50:04 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:48.898 08:50:04 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:09:48.898 08:50:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:48.898 08:50:04 accel -- common/autotest_common.sh@10 -- # set +x 00:09:48.898 ************************************ 00:09:48.898 START TEST accel_decomp_mcore 00:09:48.898 ************************************ 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:48.898 08:50:04 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:48.898 [2024-05-15 08:50:04.825297] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:48.898 [2024-05-15 08:50:04.825437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63651 ] 00:09:48.898 [2024-05-15 08:50:04.965623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.898 [2024-05-15 08:50:05.039472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.898 [2024-05-15 08:50:05.039580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.898 [2024-05-15 08:50:05.039648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.898 [2024-05-15 08:50:05.039655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.898 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.898 08:50:05 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:48.899 08:50:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:50.273 00:09:50.273 real 0m1.444s 00:09:50.273 user 0m4.477s 00:09:50.273 sys 0m0.104s 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:50.273 08:50:06 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:50.273 ************************************ 00:09:50.273 END TEST accel_decomp_mcore 00:09:50.273 ************************************ 00:09:50.273 08:50:06 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:50.273 08:50:06 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:50.273 08:50:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.273 08:50:06 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.273 ************************************ 00:09:50.273 START TEST accel_decomp_full_mcore 00:09:50.273 ************************************ 00:09:50.273 08:50:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:50.273 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:50.273 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:50.273 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.273 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:50.273 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:50.274 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
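The *_mcore variants repeat the decompress workload across four reactors: the -m 0xf in the accel_perf invocation matches the "-c 0xf" EAL core mask and the four "Reactor started on core" lines above, which is also why user time (0m4.477s) exceeds the roughly 1.4 s wall-clock time reported for that run. A hedged multi-core equivalent, again assuming the vagrant layout (add -o 0 to mirror accel_decomp_full_mcore):

    cd /home/vagrant/spdk_repo/spdk
    # decompress on cores 0-3 and verify the result
    ./build/examples/accel_perf -m 0xf -t 1 -w decompress -l test/accel/bib -y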
00:09:50.274 [2024-05-15 08:50:06.309689] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:50.274 [2024-05-15 08:50:06.309826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63683 ] 00:09:50.274 [2024-05-15 08:50:06.450075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.532 [2024-05-15 08:50:06.540326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.532 [2024-05-15 08:50:06.540429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.532 [2024-05-15 08:50:06.540503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.532 [2024-05-15 08:50:06.540509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.532 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.532 08:50:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:50.533 08:50:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.907 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:51.908 00:09:51.908 real 0m1.474s 00:09:51.908 user 0m4.537s 00:09:51.908 sys 0m0.106s 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:51.908 08:50:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:51.908 ************************************ 00:09:51.908 END TEST accel_decomp_full_mcore 00:09:51.908 ************************************ 00:09:51.908 08:50:07 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:51.908 08:50:07 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:09:51.908 08:50:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.908 08:50:07 accel -- common/autotest_common.sh@10 -- # set +x 00:09:51.908 ************************************ 00:09:51.908 START TEST accel_decomp_mthread 00:09:51.908 ************************************ 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:51.908 08:50:07 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:51.908 [2024-05-15 08:50:07.824099] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
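The accel_decomp_mthread run starting here repeats the same decompress workload on a single core (the EAL line below shows -c 0x1 and one reactor) with -T 2, which the script later records as val=2. A hedged side-by-side of the two command lines taken from this log; reading -T as a per-core thread count is an assumption, and the /dev/fd/62 config plumbing is the same as in the sketch above:

  SPDK=/home/vagrant/spdk_repo/spdk
  ACCEL_PERF=$SPDK/build/examples/accel_perf
  # multicore, full 111250-byte buffer (accel_decomp_full_mcore above)
  "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf
  # one core, two threads, default 4096-byte chunks (accel_decomp_mthread below)
  "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2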
00:09:51.908 [2024-05-15 08:50:07.824192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63726 ] 00:09:51.908 [2024-05-15 08:50:07.959545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.908 [2024-05-15 08:50:08.018557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.908 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:51.909 08:50:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.285 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:53.286 00:09:53.286 real 0m1.405s 00:09:53.286 user 0m1.229s 00:09:53.286 sys 0m0.084s 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.286 08:50:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:53.286 ************************************ 00:09:53.286 END TEST accel_decomp_mthread 00:09:53.286 ************************************ 00:09:53.286 08:50:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:53.286 08:50:09 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:53.286 08:50:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.286 08:50:09 accel -- common/autotest_common.sh@10 -- # set +x 00:09:53.286 ************************************ 00:09:53.286 START TEST accel_decomp_full_mthread 00:09:53.286 ************************************ 00:09:53.286 08:50:09 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:53.286 [2024-05-15 08:50:09.266847] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
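Each of these cases is driven through run_test, whose visible behavior in this log is a START TEST banner, a timed command (the real/user/sys lines), and an END TEST banner. A hedged re-creation of that wrapper, for orientation only; the real run_test in autotest_common.sh also tags the xtrace output and checks exit codes:

  run_test_sketch() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
  }
  # usage mirroring the invocation recorded above:
  # run_test_sketch accel_decomp_full_mthread accel_test -t 1 -w decompress \
  #     -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2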
00:09:53.286 [2024-05-15 08:50:09.266942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63755 ] 00:09:53.286 [2024-05-15 08:50:09.398454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.286 [2024-05-15 08:50:09.459937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.286 08:50:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.661 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:54.661 ************************************ 00:09:54.662 END TEST accel_decomp_full_mthread 00:09:54.662 ************************************ 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.662 00:09:54.662 real 0m1.434s 00:09:54.662 user 0m1.267s 00:09:54.662 sys 0m0.071s 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:54.662 08:50:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:54.662 08:50:10 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:09:54.662 08:50:10 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:54.662 08:50:10 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:54.662 08:50:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:54.662 08:50:10 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:54.662 08:50:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:54.662 08:50:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.662 08:50:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:54.662 08:50:10 accel -- common/autotest_common.sh@10 -- # set +x 00:09:54.662 08:50:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.662 08:50:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.662 08:50:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:54.662 08:50:10 accel -- accel/accel.sh@41 -- # jq -r . 00:09:54.662 ************************************ 00:09:54.662 START TEST accel_dif_functional_tests 00:09:54.662 ************************************ 00:09:54.662 08:50:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:54.662 [2024-05-15 08:50:10.789881] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:54.662 [2024-05-15 08:50:10.790013] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:09:54.921 [2024-05-15 08:50:10.929488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:54.921 [2024-05-15 08:50:11.019197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.921 [2024-05-15 08:50:11.019295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.921 [2024-05-15 08:50:11.019319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.921 00:09:54.921 00:09:54.921 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.921 http://cunit.sourceforge.net/ 00:09:54.921 00:09:54.921 00:09:54.921 Suite: accel_dif 00:09:54.921 Test: verify: DIF generated, GUARD check ...passed 00:09:54.921 Test: verify: DIF generated, APPTAG check ...passed 00:09:54.921 Test: verify: DIF generated, REFTAG check ...passed 00:09:54.921 Test: verify: DIF not generated, GUARD check ...[2024-05-15 08:50:11.083471] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:54.921 [2024-05-15 08:50:11.083762] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:54.921 passed 00:09:54.921 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 08:50:11.084152] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:54.921 [2024-05-15 08:50:11.084373] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:54.921 passed 00:09:54.921 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 08:50:11.084840] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:54.921 [2024-05-15 08:50:11.085142] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:54.921 passed 00:09:54.921 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:09:54.921 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 08:50:11.085784] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:54.921 passed 00:09:54.921 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:54.921 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:54.921 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:54.921 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 08:50:11.086207] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:54.921 passed 00:09:54.921 Test: generate copy: DIF generated, GUARD check ...passed 00:09:54.921 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:54.921 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:54.921 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:54.921 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:54.921 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:54.921 Test: generate copy: iovecs-len validate ...passed 00:09:54.921 Test: generate copy: buffer alignment validate ...passed 00:09:54.921 00:09:54.921 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.921 suites 1 1 n/a 0 0 00:09:54.921 tests 20 20 20 0 0 00:09:54.921 asserts 204 204 204 0 n/a 00:09:54.921 00:09:54.921 Elapsed time = 0.010 seconds 00:09:54.921 [2024-05-15 08:50:11.086946] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:55.179 00:09:55.179 real 0m0.550s 00:09:55.179 user 0m0.623s 00:09:55.179 sys 0m0.121s 00:09:55.179 08:50:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.179 ************************************ 00:09:55.179 END TEST accel_dif_functional_tests 00:09:55.179 ************************************ 00:09:55.179 08:50:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:55.179 00:09:55.179 real 0m33.157s 00:09:55.179 user 0m35.324s 00:09:55.179 sys 0m3.098s 00:09:55.179 08:50:11 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.179 08:50:11 accel -- common/autotest_common.sh@10 -- # set +x 00:09:55.179 ************************************ 00:09:55.179 END TEST accel 00:09:55.179 ************************************ 00:09:55.179 08:50:11 -- spdk/autotest.sh@193 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:55.179 08:50:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:55.179 08:50:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.179 08:50:11 -- common/autotest_common.sh@10 -- # set +x 00:09:55.179 ************************************ 00:09:55.179 START TEST accel_rpc 00:09:55.179 ************************************ 00:09:55.179 08:50:11 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:55.438 * Looking for test storage... 00:09:55.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:55.438 08:50:11 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:55.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
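The accel_rpc run whose banner appears just above drives a bare spdk_tgt entirely over JSON-RPC. The sequence below is a hedged condensation of the calls visible in the trace that follows, issued directly via scripts/rpc.py; the test itself goes through its rpc_cmd, waitforlisten, and killprocess helpers, for which the sleep and kill here are crude stand-ins:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  TGT_PID=$!
  sleep 2                                                        # harness uses waitforlisten on the RPC socket
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect   # assign copy to a bogus module first
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # reassign to the software module
  "$SPDK/scripts/rpc.py" framework_start_init
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # trace below expects 'software'
  kill "$TGT_PID"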
00:09:55.438 08:50:11 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63855 00:09:55.438 08:50:11 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63855 00:09:55.438 08:50:11 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:55.438 08:50:11 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 63855 ']' 00:09:55.438 08:50:11 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.438 08:50:11 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:55.438 08:50:11 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.438 08:50:11 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:55.438 08:50:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 [2024-05-15 08:50:11.516365] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:55.438 [2024-05-15 08:50:11.517110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:09:55.438 [2024-05-15 08:50:11.657308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.696 [2024-05-15 08:50:11.728352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:56.630 08:50:12 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:56.630 08:50:12 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:56.630 08:50:12 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:56.630 08:50:12 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:56.630 08:50:12 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.630 ************************************ 00:09:56.630 START TEST accel_assign_opcode 00:09:56.630 ************************************ 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:56.630 [2024-05-15 08:50:12.517051] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:56.630 [2024-05-15 
08:50:12.529082] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.630 software 00:09:56.630 ************************************ 00:09:56.630 END TEST accel_assign_opcode 00:09:56.630 ************************************ 00:09:56.630 00:09:56.630 real 0m0.207s 00:09:56.630 user 0m0.046s 00:09:56.630 sys 0m0.008s 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.630 08:50:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:56.630 08:50:12 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63855 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 63855 ']' 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 63855 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63855 00:09:56.630 killing process with pid 63855 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63855' 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@965 -- # kill 63855 00:09:56.630 08:50:12 accel_rpc -- common/autotest_common.sh@970 -- # wait 63855 00:09:56.960 ************************************ 00:09:56.960 END TEST accel_rpc 00:09:56.960 ************************************ 00:09:56.960 00:09:56.960 real 0m1.715s 00:09:56.960 user 0m1.923s 00:09:56.960 sys 0m0.344s 00:09:56.960 08:50:13 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.960 08:50:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.960 08:50:13 -- spdk/autotest.sh@194 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:56.960 08:50:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:56.960 08:50:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:56.960 08:50:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.960 ************************************ 00:09:56.960 START TEST app_cmdline 00:09:56.960 
************************************ 00:09:56.960 08:50:13 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:57.217 * Looking for test storage... 00:09:57.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:57.217 08:50:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:57.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.217 08:50:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63966 00:09:57.217 08:50:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:57.217 08:50:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63966 00:09:57.217 08:50:13 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 63966 ']' 00:09:57.217 08:50:13 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.217 08:50:13 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:57.217 08:50:13 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.217 08:50:13 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:57.217 08:50:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:57.217 [2024-05-15 08:50:13.252172] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:57.217 [2024-05-15 08:50:13.252540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63966 ] 00:09:57.217 [2024-05-15 08:50:13.389460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.475 [2024-05-15 08:50:13.454344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.475 08:50:13 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:57.475 08:50:13 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:09:57.475 08:50:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:57.734 { 00:09:57.734 "fields": { 00:09:57.734 "commit": "08ee631f2", 00:09:57.734 "major": 24, 00:09:57.734 "minor": 5, 00:09:57.734 "patch": 0, 00:09:57.734 "suffix": "-pre" 00:09:57.734 }, 00:09:57.734 "version": "SPDK v24.05-pre git sha1 08ee631f2" 00:09:57.734 } 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:57.734 08:50:13 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.734 08:50:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:57.734 08:50:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:57.734 08:50:13 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.991 08:50:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:57.991 08:50:13 
app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:57.991 08:50:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:57.991 08:50:13 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:57.991 08:50:13 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:57.991 08:50:13 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.991 08:50:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.991 08:50:13 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.991 08:50:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.992 08:50:13 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.992 08:50:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.992 08:50:13 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.992 08:50:13 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:57.992 08:50:13 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:58.250 2024/05/15 08:50:14 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:09:58.250 request: 00:09:58.250 { 00:09:58.250 "method": "env_dpdk_get_mem_stats", 00:09:58.250 "params": {} 00:09:58.250 } 00:09:58.250 Got JSON-RPC error response 00:09:58.250 GoRPCClient: error on JSON-RPC call 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.250 08:50:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63966 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 63966 ']' 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 63966 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63966 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:58.250 killing process with pid 63966 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63966' 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@965 -- # kill 63966 00:09:58.250 08:50:14 app_cmdline -- common/autotest_common.sh@970 -- # wait 63966 00:09:58.508 00:09:58.508 real 0m1.528s 00:09:58.508 user 0m2.066s 00:09:58.508 sys 0m0.386s 00:09:58.508 08:50:14 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:58.508 
08:50:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:58.508 ************************************ 00:09:58.508 END TEST app_cmdline 00:09:58.508 ************************************ 00:09:58.508 08:50:14 -- spdk/autotest.sh@195 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:58.508 08:50:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:58.508 08:50:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:58.508 08:50:14 -- common/autotest_common.sh@10 -- # set +x 00:09:58.508 ************************************ 00:09:58.508 START TEST version 00:09:58.508 ************************************ 00:09:58.508 08:50:14 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:58.767 * Looking for test storage... 00:09:58.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:58.767 08:50:14 version -- app/version.sh@17 -- # get_header_version major 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # cut -f2 00:09:58.767 08:50:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # tr -d '"' 00:09:58.767 08:50:14 version -- app/version.sh@17 -- # major=24 00:09:58.767 08:50:14 version -- app/version.sh@18 -- # get_header_version minor 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # cut -f2 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # tr -d '"' 00:09:58.767 08:50:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:58.767 08:50:14 version -- app/version.sh@18 -- # minor=5 00:09:58.767 08:50:14 version -- app/version.sh@19 -- # get_header_version patch 00:09:58.767 08:50:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # cut -f2 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # tr -d '"' 00:09:58.767 08:50:14 version -- app/version.sh@19 -- # patch=0 00:09:58.767 08:50:14 version -- app/version.sh@20 -- # get_header_version suffix 00:09:58.767 08:50:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # cut -f2 00:09:58.767 08:50:14 version -- app/version.sh@14 -- # tr -d '"' 00:09:58.767 08:50:14 version -- app/version.sh@20 -- # suffix=-pre 00:09:58.767 08:50:14 version -- app/version.sh@22 -- # version=24.5 00:09:58.767 08:50:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:58.767 08:50:14 version -- app/version.sh@28 -- # version=24.5rc0 00:09:58.767 08:50:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:58.767 08:50:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:58.767 08:50:14 version -- app/version.sh@30 -- # py_version=24.5rc0 00:09:58.767 08:50:14 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:09:58.767 00:09:58.767 real 0m0.137s 00:09:58.767 user 0m0.082s 00:09:58.767 sys 0m0.082s 00:09:58.767 08:50:14 version -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:09:58.767 08:50:14 version -- common/autotest_common.sh@10 -- # set +x 00:09:58.767 ************************************ 00:09:58.767 END TEST version 00:09:58.767 ************************************ 00:09:58.767 08:50:14 -- spdk/autotest.sh@197 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@207 -- # uname -s 00:09:58.767 08:50:14 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:09:58.767 08:50:14 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:09:58.767 08:50:14 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:09:58.767 08:50:14 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@269 -- # timing_exit lib 00:09:58.767 08:50:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.767 08:50:14 -- common/autotest_common.sh@10 -- # set +x 00:09:58.767 08:50:14 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@288 -- # '[' 1 -eq 1 ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@289 -- # export NET_TYPE 00:09:58.767 08:50:14 -- spdk/autotest.sh@292 -- # '[' tcp = rdma ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@295 -- # '[' tcp = tcp ']' 00:09:58.767 08:50:14 -- spdk/autotest.sh@296 -- # run_test_wrapper nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:58.767 08:50:14 -- spdk/autotest.sh@10 -- # local test_name=nvmf_tcp 00:09:58.767 08:50:14 -- spdk/autotest.sh@11 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:58.767 08:50:14 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:58.767 08:50:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:58.767 08:50:14 -- common/autotest_common.sh@10 -- # set +x 00:09:58.767 ************************************ 00:09:58.767 START TEST nvmf_tcp 00:09:58.767 ************************************ 00:09:58.767 08:50:14 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:58.767 * Looking for test storage... 00:09:58.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.767 08:50:14 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.767 08:50:14 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.767 08:50:14 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.767 08:50:14 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.767 08:50:14 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.767 08:50:14 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.767 08:50:14 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:09:58.767 08:50:14 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:58.767 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:58.767 08:50:14 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:58.767 08:50:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.768 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:58.768 08:50:14 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:58.768 08:50:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:58.768 08:50:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:58.768 08:50:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.027 ************************************ 00:09:59.027 START TEST nvmf_example 00:09:59.027 ************************************ 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.027 * Looking for test storage... 
00:09:59.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
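The hostnqn/hostid pair that common.sh derives above can be reproduced by hand; the short sketch below is not the harness code itself, it only assumes nvme-cli is installed, and the final connect line is purely illustrative (this test exercises the target through spdk_nvme_perf rather than nvme connect).

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # one way to keep only the trailing UUID, matching the value traced above
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# An initiator could pass the same identity when connecting to the subsystem built later in this test:
#   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"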
00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.027 08:50:15 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:59.027 Cannot find device "nvmf_init_br" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:59.027 Cannot find device "nvmf_tgt_br" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.027 Cannot find device "nvmf_tgt_br2" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:59.027 Cannot find device "nvmf_init_br" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:59.027 Cannot find device "nvmf_tgt_br" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:59.027 Cannot find device 
"nvmf_tgt_br2" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:59.027 Cannot find device "nvmf_br" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:59.027 Cannot find device "nvmf_init_if" 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.027 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.028 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.028 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.028 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:59.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:09:59.286 00:09:59.286 --- 10.0.0.2 ping statistics --- 00:09:59.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.286 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:09:59.286 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:59.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:59.286 00:09:59.286 --- 10.0.0.3 ping statistics --- 00:09:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.287 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:59.287 00:09:59.287 --- 10.0.0.1 ping statistics --- 00:09:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.287 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64296 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64296 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 64296 ']' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:59.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:59.287 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:09:59.854 08:50:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:12.056 Initializing NVMe Controllers 00:10:12.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:12.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:12.056 Initialization complete. Launching workers. 00:10:12.056 ======================================================== 00:10:12.056 Latency(us) 00:10:12.056 Device Information : IOPS MiB/s Average min max 00:10:12.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14519.30 56.72 4407.24 668.44 20251.63 00:10:12.056 ======================================================== 00:10:12.056 Total : 14519.30 56.72 4407.24 668.44 20251.63 00:10:12.056 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.056 rmmod nvme_tcp 00:10:12.056 rmmod nvme_fabrics 00:10:12.056 rmmod nvme_keyring 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64296 ']' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64296 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 64296 ']' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 64296 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64296 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:10:12.056 killing process with pid 64296 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64296' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 64296 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 64296 00:10:12.056 nvmf threads initialize successfully 00:10:12.056 bdev subsystem init successfully 
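The subsystem that spdk_nvme_perf just exercised was assembled entirely over JSON-RPC; the sketch below replays the same sequence by calling scripts/rpc.py directly (the harness goes through its rpc_cmd wrapper instead, the default /var/tmp/spdk.sock socket is assumed, and the $RPC variable is only shorthand here).

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                 # same flags as the rpc_cmd call above
MALLOC=$($RPC bdev_malloc_create 64 512)                     # 64 MB malloc bdev, 512-byte blocks; prints the bdev name (Malloc0 above)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$MALLOC"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The perf run above then targeted that listener:
#   spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
#     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'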
00:10:12.056 created a nvmf target service 00:10:12.056 create targets's poll groups done 00:10:12.056 all subsystems of target started 00:10:12.056 nvmf target is running 00:10:12.056 all subsystems of target stopped 00:10:12.056 destroy targets's poll groups done 00:10:12.056 destroyed the nvmf target service 00:10:12.056 bdev subsystem finish successfully 00:10:12.056 nvmf threads destroy successfully 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.056 00:10:12.056 real 0m11.489s 00:10:12.056 user 0m40.963s 00:10:12.056 sys 0m1.844s 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.056 08:50:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.056 ************************************ 00:10:12.056 END TEST nvmf_example 00:10:12.056 ************************************ 00:10:12.056 08:50:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:12.056 08:50:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:12.056 08:50:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.056 08:50:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.056 ************************************ 00:10:12.056 START TEST nvmf_filesystem 00:10:12.056 ************************************ 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:12.056 * Looking for test storage... 
00:10:12.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:12.056 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:12.057 
08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:12.057 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:12.057 #define SPDK_CONFIG_H 00:10:12.057 #define SPDK_CONFIG_APPS 1 00:10:12.057 #define SPDK_CONFIG_ARCH native 00:10:12.057 #undef SPDK_CONFIG_ASAN 00:10:12.057 #define SPDK_CONFIG_AVAHI 1 00:10:12.057 #undef SPDK_CONFIG_CET 00:10:12.057 #define SPDK_CONFIG_COVERAGE 1 00:10:12.057 #define SPDK_CONFIG_CROSS_PREFIX 00:10:12.057 #undef SPDK_CONFIG_CRYPTO 00:10:12.057 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:12.057 #undef SPDK_CONFIG_CUSTOMOCF 00:10:12.057 #undef SPDK_CONFIG_DAOS 00:10:12.057 #define SPDK_CONFIG_DAOS_DIR 00:10:12.057 #define SPDK_CONFIG_DEBUG 1 00:10:12.057 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:12.057 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:12.057 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:12.057 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:12.057 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:12.057 #undef SPDK_CONFIG_DPDK_UADK 00:10:12.057 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:12.057 #define SPDK_CONFIG_EXAMPLES 1 00:10:12.057 #undef SPDK_CONFIG_FC 00:10:12.057 #define SPDK_CONFIG_FC_PATH 00:10:12.057 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:12.057 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:12.057 #undef SPDK_CONFIG_FUSE 00:10:12.057 #undef SPDK_CONFIG_FUZZER 00:10:12.057 #define SPDK_CONFIG_FUZZER_LIB 00:10:12.057 #define SPDK_CONFIG_GOLANG 1 00:10:12.057 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:12.057 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:12.057 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:12.057 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:10:12.057 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:12.057 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:12.057 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:12.057 #define SPDK_CONFIG_IDXD 1 00:10:12.057 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:12.057 #undef SPDK_CONFIG_IPSEC_MB 00:10:12.057 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:12.057 #define SPDK_CONFIG_ISAL 1 00:10:12.057 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:12.057 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:12.057 #define SPDK_CONFIG_LIBDIR 00:10:12.057 #undef SPDK_CONFIG_LTO 00:10:12.057 #define SPDK_CONFIG_MAX_LCORES 00:10:12.057 #define SPDK_CONFIG_NVME_CUSE 1 00:10:12.057 #undef SPDK_CONFIG_OCF 00:10:12.058 #define SPDK_CONFIG_OCF_PATH 00:10:12.058 #define SPDK_CONFIG_OPENSSL_PATH 00:10:12.058 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:12.058 #define SPDK_CONFIG_PGO_DIR 00:10:12.058 #undef SPDK_CONFIG_PGO_USE 00:10:12.058 #define SPDK_CONFIG_PREFIX /usr/local 00:10:12.058 #undef SPDK_CONFIG_RAID5F 00:10:12.058 #undef SPDK_CONFIG_RBD 00:10:12.058 #define SPDK_CONFIG_RDMA 1 00:10:12.058 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:12.058 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:12.058 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 
00:10:12.058 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:12.058 #define SPDK_CONFIG_SHARED 1 00:10:12.058 #undef SPDK_CONFIG_SMA 00:10:12.058 #define SPDK_CONFIG_TESTS 1 00:10:12.058 #undef SPDK_CONFIG_TSAN 00:10:12.058 #define SPDK_CONFIG_UBLK 1 00:10:12.058 #define SPDK_CONFIG_UBSAN 1 00:10:12.058 #undef SPDK_CONFIG_UNIT_TESTS 00:10:12.058 #undef SPDK_CONFIG_URING 00:10:12.058 #define SPDK_CONFIG_URING_PATH 00:10:12.058 #undef SPDK_CONFIG_URING_ZNS 00:10:12.058 #define SPDK_CONFIG_USDT 1 00:10:12.058 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:12.058 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:12.058 #undef SPDK_CONFIG_VFIO_USER 00:10:12.058 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:12.058 #define SPDK_CONFIG_VHOST 1 00:10:12.058 #define SPDK_CONFIG_VIRTIO 1 00:10:12.058 #undef SPDK_CONFIG_VTUNE 00:10:12.058 #define SPDK_CONFIG_VTUNE_DIR 00:10:12.058 #define SPDK_CONFIG_WERROR 1 00:10:12.058 #define SPDK_CONFIG_WPDK_DIR 00:10:12.058 #undef SPDK_CONFIG_XNVME 00:10:12.058 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:12.058 
08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:10:12.058 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:10:12.059 
08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 
00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:12.059 08:50:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:12.059 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 64528 ]] 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 64528 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback 
storage_candidates 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.x32W0J 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.x32W0J/tests/target /tmp/spdk.x32W0J 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264512512 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267887616 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=13815525376 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5208772608 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13815525376 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5208772608 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267756544 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=135168 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:10:12.060 08:50:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=94540808192 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5161971712 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:10:12.060 * Looking for test storage... 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:10:12.060 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=13815525376 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.061 08:50:26 
nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.061 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.062 Cannot find device "nvmf_tgt_br" 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.062 Cannot find device "nvmf_tgt_br2" 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:10:12.062 08:50:26 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.062 Cannot find device "nvmf_tgt_br" 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.062 Cannot find device "nvmf_tgt_br2" 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.062 08:50:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:10:12.062 00:10:12.062 --- 10.0.0.2 ping statistics --- 00:10:12.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.062 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:10:12.062 00:10:12.062 --- 10.0.0.3 ping statistics --- 00:10:12.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.062 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:12.062 00:10:12.062 --- 10.0.0.1 ping statistics --- 00:10:12.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.062 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 ************************************ 00:10:12.062 START TEST nvmf_filesystem_no_in_capsule 00:10:12.062 ************************************ 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@47 -- # in_capsule=0 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=64697 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 64697 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 64697 ']' 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:12.062 08:50:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 [2024-05-15 08:50:27.207796] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:12.062 [2024-05-15 08:50:27.207954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.062 [2024-05-15 08:50:27.351223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.062 [2024-05-15 08:50:27.429108] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.062 [2024-05-15 08:50:27.429169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.062 [2024-05-15 08:50:27.429184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.062 [2024-05-15 08:50:27.429194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.062 [2024-05-15 08:50:27.429204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
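The nvmfappstart/waitforlisten pair driving this block launches nvmf_tgt inside the target namespace and then blocks until the app's RPC socket answers. A simplified sketch of that pattern (binary path, namespace, and core mask are taken from the command logged above; the polling loop is an illustration, not the real waitforlisten helper):

    # Sketch only: start the target in its netns and wait for the RPC socket
    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

The RPC socket is a Unix-domain socket, so it stays reachable from the host mount namespace even though the target's network stack lives in nvmf_tgt_ns_spdk; that is why the rpc_cmd calls later in the log run without ip netns exec.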
00:10:12.062 [2024-05-15 08:50:27.429332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.062 [2024-05-15 08:50:27.430296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.062 [2024-05-15 08:50:27.430426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.062 [2024-05-15 08:50:27.430433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 [2024-05-15 08:50:28.238454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.062 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.324 Malloc1 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.324 [2024-05-15 08:50:28.366276] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:12.324 [2024-05-15 08:50:28.366811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:10:12.324 { 00:10:12.324 "aliases": [ 00:10:12.324 "e058dc8d-79b3-4446-b383-d88200595153" 00:10:12.324 ], 00:10:12.324 "assigned_rate_limits": { 00:10:12.324 "r_mbytes_per_sec": 0, 00:10:12.324 "rw_ios_per_sec": 0, 00:10:12.324 "rw_mbytes_per_sec": 0, 00:10:12.324 "w_mbytes_per_sec": 0 00:10:12.324 }, 00:10:12.324 "block_size": 512, 00:10:12.324 "claim_type": "exclusive_write", 00:10:12.324 "claimed": true, 00:10:12.324 "driver_specific": {}, 00:10:12.324 "memory_domains": [ 00:10:12.324 { 00:10:12.324 "dma_device_id": "system", 00:10:12.324 "dma_device_type": 1 00:10:12.324 }, 00:10:12.324 { 00:10:12.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.324 "dma_device_type": 2 00:10:12.324 } 00:10:12.324 ], 00:10:12.324 "name": "Malloc1", 00:10:12.324 "num_blocks": 1048576, 00:10:12.324 "product_name": "Malloc disk", 00:10:12.324 "supported_io_types": { 00:10:12.324 "abort": true, 00:10:12.324 "compare": false, 00:10:12.324 "compare_and_write": false, 00:10:12.324 "flush": true, 00:10:12.324 "nvme_admin": false, 00:10:12.324 "nvme_io": false, 00:10:12.324 "read": true, 00:10:12.324 "reset": true, 00:10:12.324 
"unmap": true, 00:10:12.324 "write": true, 00:10:12.324 "write_zeroes": true 00:10:12.324 }, 00:10:12.324 "uuid": "e058dc8d-79b3-4446-b383-d88200595153", 00:10:12.324 "zoned": false 00:10:12.324 } 00:10:12.324 ]' 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:12.324 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.583 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.583 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:10:12.583 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.583 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:12.583 08:50:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:14.481 08:50:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:14.481 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:14.740 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:14.740 08:50:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.697 ************************************ 00:10:15.697 START TEST filesystem_ext4 00:10:15.697 ************************************ 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:10:15.697 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:10:15.697 08:50:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:15.697 mke2fs 1.46.5 (30-Dec-2021) 00:10:15.697 Discarding device blocks: 0/522240 done 00:10:15.697 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:15.697 Filesystem UUID: 7785082c-27c8-473f-ba36-a21cb29543a8 00:10:15.697 Superblock backups stored on blocks: 00:10:15.697 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:15.697 00:10:15.697 Allocating group tables: 0/64 done 00:10:15.697 Writing inode tables: 0/64 done 00:10:15.955 Creating journal (8192 blocks): done 00:10:15.955 Writing superblocks and filesystem accounting information: 0/64 done 00:10:15.955 00:10:15.955 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:10:15.955 08:50:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 64697 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.955 00:10:15.955 real 0m0.353s 00:10:15.955 user 0m0.020s 00:10:15.955 sys 0m0.047s 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:15.955 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:15.955 ************************************ 00:10:15.955 END TEST filesystem_ext4 00:10:15.955 ************************************ 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.214 08:50:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.214 ************************************ 00:10:16.214 START TEST filesystem_btrfs 00:10:16.214 ************************************ 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:16.214 btrfs-progs v6.6.2 00:10:16.214 See https://btrfs.readthedocs.io for more information. 00:10:16.214 00:10:16.214 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:16.214 NOTE: several default settings have changed in version 5.15, please make sure 00:10:16.214 this does not affect your deployments: 00:10:16.214 - DUP for metadata (-m dup) 00:10:16.214 - enabled no-holes (-O no-holes) 00:10:16.214 - enabled free-space-tree (-R free-space-tree) 00:10:16.214 00:10:16.214 Label: (null) 00:10:16.214 UUID: 05f6eae6-0526-42f2-81d6-87ff8bfbdef8 00:10:16.214 Node size: 16384 00:10:16.214 Sector size: 4096 00:10:16.214 Filesystem size: 510.00MiB 00:10:16.214 Block group profiles: 00:10:16.214 Data: single 8.00MiB 00:10:16.214 Metadata: DUP 32.00MiB 00:10:16.214 System: DUP 8.00MiB 00:10:16.214 SSD detected: yes 00:10:16.214 Zoned device: no 00:10:16.214 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:16.214 Runtime features: free-space-tree 00:10:16.214 Checksum: crc32c 00:10:16.214 Number of devices: 1 00:10:16.214 Devices: 00:10:16.214 ID SIZE PATH 00:10:16.214 1 510.00MiB /dev/nvme0n1p1 00:10:16.214 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:16.214 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 64697 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:16.215 00:10:16.215 real 0m0.175s 00:10:16.215 user 0m0.017s 00:10:16.215 sys 0m0.057s 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:16.215 ************************************ 00:10:16.215 END TEST filesystem_btrfs 00:10:16.215 ************************************ 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:16.215 08:50:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.215 ************************************ 00:10:16.215 START TEST filesystem_xfs 00:10:16.215 ************************************ 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:10:16.215 08:50:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:16.474 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:16.474 = sectsz=512 attr=2, projid32bit=1 00:10:16.474 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:16.474 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:16.474 data = bsize=4096 blocks=130560, imaxpct=25 00:10:16.474 = sunit=0 swidth=0 blks 00:10:16.474 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:16.474 log =internal log bsize=4096 blocks=16384, version=2 00:10:16.474 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:16.474 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:17.040 Discarding blocks...Done. 
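Across the ext4, btrfs, and xfs runs above, make_filesystem (common/autotest_common.sh@922-933) only varies the force flag it passes to mkfs: -F for ext4, -f for everything else. A condensed sketch of that selection (the real helper also keeps a retry counter, local i=0 in the traces, which is omitted here):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4's mke2fs wants -F to overwrite; btrfs/xfs take -f (see the @927-@930 traces)
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }
    # usage matching the traces: make_filesystem xfs /dev/nvme0n1p1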
00:10:17.040 08:50:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:10:17.040 08:50:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 64697 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.569 00:10:19.569 real 0m3.322s 00:10:19.569 user 0m0.024s 00:10:19.569 sys 0m0.044s 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:19.569 ************************************ 00:10:19.569 END TEST filesystem_xfs 00:10:19.569 ************************************ 00:10:19.569 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:19.828 
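The lines around here are the host-side cleanup for the first pass: drop the test partition, disconnect the controller, and wait until no block device reports the SPDKISFASTANDAWESOME serial any more. Stripped of the helper wrappers (waitforserial_disconnect's bounded retry is reduced to a plain loop here):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # poll until the namespace has really disappeared from the host
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done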
08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 64697 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 64697 ']' 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 64697 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64697 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:19.828 killing process with pid 64697 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64697' 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 64697 00:10:19.828 [2024-05-15 08:50:35.910789] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:19.828 08:50:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 64697 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:20.087 00:10:20.087 real 0m9.075s 00:10:20.087 user 0m34.127s 00:10:20.087 sys 0m1.532s 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.087 ************************************ 00:10:20.087 END TEST nvmf_filesystem_no_in_capsule 00:10:20.087 ************************************ 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
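The run_test call here starts the second pass, nvmf_filesystem_in_capsule, which is nvmf_filesystem_part with in_capsule=4096 instead of 0. In these traces the visible difference on the target side is the -c argument handed to nvmf_create_transport (filesystem.sh@47 and @52), plus the *_in_capsule_* test names chosen at @76:

    # first pass: no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # second pass: allow up to 4096 bytes of in-capsule data per command capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

rpc_cmd is the suite's helper for issuing these RPCs to the running target (effectively SPDK's scripts/rpc.py against /var/tmp/spdk.sock).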
00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.087 ************************************ 00:10:20.087 START TEST nvmf_filesystem_in_capsule 00:10:20.087 ************************************ 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65003 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65003 00:10:20.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 65003 ']' 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:20.087 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.087 [2024-05-15 08:50:36.297836] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:20.087 [2024-05-15 08:50:36.298349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.346 [2024-05-15 08:50:36.433415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.346 [2024-05-15 08:50:36.493456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.346 [2024-05-15 08:50:36.493527] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.346 [2024-05-15 08:50:36.493546] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.346 [2024-05-15 08:50:36.493574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:20.347 [2024-05-15 08:50:36.493591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.347 [2024-05-15 08:50:36.493684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.347 [2024-05-15 08:50:36.493852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.347 [2024-05-15 08:50:36.494371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.347 [2024-05-15 08:50:36.494394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.347 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:20.347 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:10:20.347 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.347 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.347 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 [2024-05-15 08:50:36.610806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.606 08:50:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 [2024-05-15 08:50:36.735014] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:20.606 [2024-05-15 08:50:36.735286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:10:20.606 { 00:10:20.606 "aliases": [ 00:10:20.606 "b7323a76-b679-4e1d-89e0-456c6b4ed4fb" 00:10:20.606 ], 00:10:20.606 "assigned_rate_limits": { 00:10:20.606 "r_mbytes_per_sec": 0, 00:10:20.606 "rw_ios_per_sec": 0, 00:10:20.606 "rw_mbytes_per_sec": 0, 00:10:20.606 "w_mbytes_per_sec": 0 00:10:20.606 }, 00:10:20.606 "block_size": 512, 00:10:20.606 "claim_type": "exclusive_write", 00:10:20.606 "claimed": true, 00:10:20.606 "driver_specific": {}, 00:10:20.606 "memory_domains": [ 00:10:20.606 { 00:10:20.606 "dma_device_id": "system", 00:10:20.606 "dma_device_type": 1 00:10:20.606 }, 00:10:20.606 { 00:10:20.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.606 "dma_device_type": 2 00:10:20.606 } 00:10:20.606 ], 00:10:20.606 "name": "Malloc1", 00:10:20.606 "num_blocks": 1048576, 00:10:20.606 "product_name": "Malloc disk", 00:10:20.606 "supported_io_types": { 00:10:20.606 "abort": true, 00:10:20.606 "compare": false, 00:10:20.606 "compare_and_write": false, 00:10:20.606 "flush": true, 00:10:20.606 "nvme_admin": false, 00:10:20.606 "nvme_io": false, 00:10:20.606 "read": true, 00:10:20.606 "reset": true, 
00:10:20.606 "unmap": true, 00:10:20.606 "write": true, 00:10:20.606 "write_zeroes": true 00:10:20.606 }, 00:10:20.606 "uuid": "b7323a76-b679-4e1d-89e0-456c6b4ed4fb", 00:10:20.606 "zoned": false 00:10:20.606 } 00:10:20.606 ]' 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:10:20.606 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:20.865 08:50:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.865 08:50:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.865 08:50:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:10:20.865 08:50:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.865 08:50:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:20.865 08:50:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:22.872 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:23.135 08:50:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.072 ************************************ 00:10:24.072 START TEST filesystem_in_capsule_ext4 00:10:24.072 ************************************ 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:10:24.072 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:10:24.073 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:10:24.073 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:24.073 mke2fs 1.46.5 (30-Dec-2021) 00:10:24.073 Discarding device blocks: 0/522240 done 00:10:24.073 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:24.073 Filesystem UUID: 7de4f78e-fdd1-4a11-afdc-4ac9e8d5d264 00:10:24.073 Superblock backups stored on blocks: 00:10:24.073 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:24.073 00:10:24.073 Allocating group tables: 0/64 done 00:10:24.073 Writing inode tables: 0/64 done 00:10:24.073 Creating journal (8192 blocks): done 00:10:24.073 Writing superblocks and filesystem accounting information: 0/64 done 00:10:24.073 00:10:24.073 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:10:24.073 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65003 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.332 00:10:24.332 real 0m0.291s 00:10:24.332 user 0m0.017s 00:10:24.332 sys 0m0.050s 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 ************************************ 00:10:24.332 END TEST filesystem_in_capsule_ext4 00:10:24.332 ************************************ 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.332 ************************************ 00:10:24.332 START TEST filesystem_in_capsule_btrfs 00:10:24.332 ************************************ 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:10:24.332 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:24.591 btrfs-progs v6.6.2 00:10:24.591 See https://btrfs.readthedocs.io for more information. 00:10:24.591 00:10:24.591 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:24.591 NOTE: several default settings have changed in version 5.15, please make sure 00:10:24.591 this does not affect your deployments: 00:10:24.591 - DUP for metadata (-m dup) 00:10:24.591 - enabled no-holes (-O no-holes) 00:10:24.591 - enabled free-space-tree (-R free-space-tree) 00:10:24.591 00:10:24.591 Label: (null) 00:10:24.591 UUID: 2ee2bbd4-887d-430d-b3fe-d8dc4569fb2e 00:10:24.591 Node size: 16384 00:10:24.591 Sector size: 4096 00:10:24.591 Filesystem size: 510.00MiB 00:10:24.591 Block group profiles: 00:10:24.591 Data: single 8.00MiB 00:10:24.591 Metadata: DUP 32.00MiB 00:10:24.591 System: DUP 8.00MiB 00:10:24.591 SSD detected: yes 00:10:24.591 Zoned device: no 00:10:24.591 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:24.591 Runtime features: free-space-tree 00:10:24.591 Checksum: crc32c 00:10:24.591 Number of devices: 1 00:10:24.591 Devices: 00:10:24.591 ID SIZE PATH 00:10:24.591 1 510.00MiB /dev/nvme0n1p1 00:10:24.591 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65003 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.591 ************************************ 00:10:24.591 END TEST filesystem_in_capsule_btrfs 00:10:24.591 ************************************ 00:10:24.591 00:10:24.591 real 0m0.172s 00:10:24.591 user 0m0.029s 00:10:24.591 sys 0m0.044s 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 ************************************ 00:10:24.591 START TEST filesystem_in_capsule_xfs 00:10:24.591 ************************************ 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:10:24.591 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:10:24.592 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:10:24.592 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:10:24.592 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:10:24.592 08:50:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:24.592 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:24.592 = sectsz=512 attr=2, projid32bit=1 00:10:24.592 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:24.592 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:24.592 data = bsize=4096 blocks=130560, imaxpct=25 00:10:24.592 = sunit=0 swidth=0 blks 00:10:24.592 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:24.592 log =internal log bsize=4096 blocks=16384, version=2 00:10:24.592 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:24.592 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:25.526 Discarding blocks...Done. 
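What follows each mkfs in these traces is the same smoke test, target/filesystem.sh@23-37: mount the fresh filesystem, create and delete a file with syncs in between, unmount, and finally check with kill -0 that the target survived the I/O. In outline (65003 is the nvmfpid of this second pass):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 65003   # fails if nvmf_tgt crashed while serving the filesystem I/O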
00:10:25.526 08:50:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:10:25.526 08:50:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65003 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.425 ************************************ 00:10:27.425 END TEST filesystem_in_capsule_xfs 00:10:27.425 ************************************ 00:10:27.425 00:10:27.425 real 0m2.592s 00:10:27.425 user 0m0.021s 00:10:27.425 sys 0m0.052s 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.425 08:50:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65003 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 65003 ']' 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 65003 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65003 00:10:27.425 killing process with pid 65003 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65003' 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 65003 00:10:27.425 [2024-05-15 08:50:43.473043] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:27.425 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 65003 00:10:27.712 ************************************ 00:10:27.712 END TEST nvmf_filesystem_in_capsule 00:10:27.712 ************************************ 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:27.712 00:10:27.712 real 0m7.523s 00:10:27.712 user 0m28.106s 00:10:27.712 sys 0m1.326s 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
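The teardown traced above reduces to a short sequence once the xtrace noise is removed; a hedged sketch, with the NQN, serial number and pid (65003) copied from this run:

  # Disconnect the initiator, wait for the namespace to disappear, then remove the
  # subsystem and stop the target. The busy-wait approximates what the test's
  # waitforserial_disconnect helper does; helper names are test-internal.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 65003                                            # nvmf_tgt pid from this run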
00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.712 rmmod nvme_tcp 00:10:27.712 rmmod nvme_fabrics 00:10:27.712 rmmod nvme_keyring 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.712 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.713 08:50:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.713 08:50:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.713 08:50:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:27.713 ************************************ 00:10:27.713 END TEST nvmf_filesystem 00:10:27.713 ************************************ 00:10:27.713 00:10:27.713 real 0m17.393s 00:10:27.713 user 1m2.465s 00:10:27.713 sys 0m3.231s 00:10:27.713 08:50:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:27.713 08:50:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.972 08:50:43 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:27.972 08:50:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:27.972 08:50:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:27.972 08:50:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.972 ************************************ 00:10:27.972 START TEST nvmf_target_discovery 00:10:27.972 ************************************ 00:10:27.972 08:50:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:27.972 * Looking for test storage... 
00:10:27.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:27.972 Cannot find device "nvmf_tgt_br" 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.972 Cannot find device "nvmf_tgt_br2" 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:10:27.972 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:27.973 Cannot find device "nvmf_tgt_br" 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:27.973 Cannot find device "nvmf_tgt_br2" 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:27.973 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.231 08:50:44 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:28.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:28.231 00:10:28.231 --- 10.0.0.2 ping statistics --- 00:10:28.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.231 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:28.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:28.231 00:10:28.231 --- 10.0.0.3 ping statistics --- 00:10:28.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.231 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:28.231 00:10:28.231 --- 10.0.0.1 ping statistics --- 00:10:28.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.231 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:28.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
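The three successful pings confirm the topology nvmf_veth_init just built: one veth pair for the initiator, two for the target namespace, all joined by a bridge. Condensed to the essential commands (names as used by the test; the second target interface, 10.0.0.3, is wired identically and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host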
00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=65444 00:10:28.231 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 65444 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 65444 ']' 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:28.232 08:50:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:28.489 [2024-05-15 08:50:44.489265] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:28.489 [2024-05-15 08:50:44.489393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.490 [2024-05-15 08:50:44.631227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.490 [2024-05-15 08:50:44.716286] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.490 [2024-05-15 08:50:44.716396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.490 [2024-05-15 08:50:44.716417] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.490 [2024-05-15 08:50:44.716430] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.490 [2024-05-15 08:50:44.716442] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
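nvmfappstart boils down to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the rpc_cmd calls traced below then provision the discovery test's resources. A hedged outline (EAL flags, NQNs and serials copied from this run; the readiness poll is an approximation of waitforlisten):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags as in the trace
  for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420    # expect 6 records: discovery, 4 subsystems, 1 referral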
00:10:28.490 [2024-05-15 08:50:44.717213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.490 [2024-05-15 08:50:44.717306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.490 [2024-05-15 08:50:44.717388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.490 [2024-05-15 08:50:44.717404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 [2024-05-15 08:50:45.498922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 Null1 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.421 [2024-05-15 08:50:45.552680] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:29.421 [2024-05-15 08:50:45.552972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:29.421 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 Null2 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 Null3 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 Null4 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.422 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.680 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.680 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:29.680 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 4420 00:10:29.681 00:10:29.681 Discovery Log Number of Records 6, Generation counter 6 00:10:29.681 =====Discovery Log Entry 0====== 00:10:29.681 trtype: tcp 00:10:29.681 adrfam: ipv4 00:10:29.681 subtype: current discovery subsystem 00:10:29.681 treq: not required 00:10:29.681 portid: 0 00:10:29.681 trsvcid: 4420 00:10:29.681 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:29.681 traddr: 10.0.0.2 00:10:29.681 eflags: explicit discovery connections, duplicate discovery information 00:10:29.681 sectype: none 00:10:29.681 =====Discovery Log Entry 1====== 00:10:29.681 trtype: tcp 00:10:29.681 adrfam: ipv4 00:10:29.681 subtype: nvme subsystem 00:10:29.681 treq: not required 00:10:29.681 portid: 0 00:10:29.681 trsvcid: 4420 00:10:29.681 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:29.681 traddr: 10.0.0.2 00:10:29.681 eflags: none 00:10:29.681 sectype: none 00:10:29.681 =====Discovery Log Entry 2====== 00:10:29.681 trtype: tcp 00:10:29.681 adrfam: ipv4 00:10:29.681 subtype: nvme subsystem 00:10:29.681 treq: not required 00:10:29.681 portid: 0 00:10:29.681 trsvcid: 4420 00:10:29.681 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:29.681 traddr: 10.0.0.2 00:10:29.681 eflags: none 00:10:29.681 sectype: none 00:10:29.681 =====Discovery Log Entry 3====== 00:10:29.681 trtype: tcp 00:10:29.681 adrfam: ipv4 00:10:29.681 subtype: nvme subsystem 00:10:29.681 treq: not required 00:10:29.681 portid: 0 00:10:29.681 trsvcid: 4420 00:10:29.681 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:29.681 traddr: 10.0.0.2 00:10:29.681 eflags: none 00:10:29.681 sectype: none 00:10:29.681 =====Discovery Log Entry 4====== 00:10:29.681 trtype: tcp 00:10:29.681 adrfam: ipv4 00:10:29.681 subtype: nvme subsystem 00:10:29.681 treq: not required 00:10:29.681 portid: 0 00:10:29.681 trsvcid: 4420 00:10:29.681 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:29.681 traddr: 10.0.0.2 00:10:29.681 eflags: none 00:10:29.681 sectype: none 00:10:29.681 =====Discovery Log Entry 5====== 00:10:29.681 trtype: tcp 00:10:29.681 adrfam: ipv4 00:10:29.681 subtype: discovery subsystem referral 00:10:29.681 treq: not required 00:10:29.681 portid: 0 00:10:29.681 trsvcid: 4430 00:10:29.681 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:29.681 traddr: 10.0.0.2 00:10:29.681 eflags: none 00:10:29.681 sectype: none 00:10:29.681 Perform nvmf subsystem discovery via RPC 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 [ 00:10:29.681 { 00:10:29.681 "allow_any_host": true, 00:10:29.681 "hosts": [], 00:10:29.681 "listen_addresses": [ 00:10:29.681 { 00:10:29.681 "adrfam": "IPv4", 00:10:29.681 "traddr": "10.0.0.2", 00:10:29.681 "trsvcid": "4420", 00:10:29.681 "trtype": "TCP" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:29.681 "subtype": "Discovery" 00:10:29.681 }, 00:10:29.681 { 00:10:29.681 "allow_any_host": true, 00:10:29.681 "hosts": [], 00:10:29.681 "listen_addresses": [ 00:10:29.681 { 00:10:29.681 "adrfam": "IPv4", 00:10:29.681 "traddr": "10.0.0.2", 00:10:29.681 "trsvcid": "4420", 00:10:29.681 "trtype": "TCP" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "max_cntlid": 65519, 00:10:29.681 "max_namespaces": 32, 00:10:29.681 "min_cntlid": 1, 00:10:29.681 "model_number": "SPDK bdev Controller", 00:10:29.681 "namespaces": [ 00:10:29.681 { 00:10:29.681 "bdev_name": "Null1", 00:10:29.681 "name": "Null1", 00:10:29.681 "nguid": "E1B2DB204D04485CAAD690FCF532E4ED", 00:10:29.681 "nsid": 1, 00:10:29.681 "uuid": "e1b2db20-4d04-485c-aad6-90fcf532e4ed" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.681 "serial_number": "SPDK00000000000001", 00:10:29.681 "subtype": "NVMe" 00:10:29.681 }, 00:10:29.681 { 00:10:29.681 "allow_any_host": true, 00:10:29.681 "hosts": [], 00:10:29.681 "listen_addresses": [ 00:10:29.681 { 00:10:29.681 "adrfam": "IPv4", 00:10:29.681 "traddr": "10.0.0.2", 00:10:29.681 "trsvcid": "4420", 00:10:29.681 "trtype": "TCP" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "max_cntlid": 65519, 00:10:29.681 "max_namespaces": 32, 00:10:29.681 "min_cntlid": 1, 00:10:29.681 "model_number": "SPDK bdev Controller", 00:10:29.681 "namespaces": [ 00:10:29.681 { 00:10:29.681 "bdev_name": "Null2", 00:10:29.681 "name": "Null2", 00:10:29.681 "nguid": "4D7FCD7E1BCB433D8ABE2CF7D57EB807", 00:10:29.681 "nsid": 1, 00:10:29.681 "uuid": "4d7fcd7e-1bcb-433d-8abe-2cf7d57eb807" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:29.681 "serial_number": "SPDK00000000000002", 00:10:29.681 "subtype": "NVMe" 00:10:29.681 }, 00:10:29.681 { 00:10:29.681 "allow_any_host": true, 00:10:29.681 "hosts": [], 00:10:29.681 "listen_addresses": [ 00:10:29.681 { 00:10:29.681 "adrfam": "IPv4", 00:10:29.681 "traddr": "10.0.0.2", 00:10:29.681 "trsvcid": "4420", 00:10:29.681 "trtype": "TCP" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "max_cntlid": 65519, 00:10:29.681 "max_namespaces": 32, 00:10:29.681 "min_cntlid": 1, 00:10:29.681 "model_number": "SPDK bdev Controller", 00:10:29.681 "namespaces": [ 00:10:29.681 { 00:10:29.681 "bdev_name": "Null3", 00:10:29.681 "name": "Null3", 00:10:29.681 "nguid": "E7B183425980473495610B3644712B4A", 00:10:29.681 "nsid": 1, 00:10:29.681 "uuid": "e7b18342-5980-4734-9561-0b3644712b4a" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:29.681 "serial_number": "SPDK00000000000003", 00:10:29.681 "subtype": "NVMe" 00:10:29.681 }, 00:10:29.681 { 00:10:29.681 "allow_any_host": true, 00:10:29.681 "hosts": [], 00:10:29.681 "listen_addresses": [ 00:10:29.681 { 00:10:29.681 "adrfam": "IPv4", 00:10:29.681 "traddr": "10.0.0.2", 00:10:29.681 "trsvcid": "4420", 00:10:29.681 "trtype": "TCP" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "max_cntlid": 65519, 00:10:29.681 "max_namespaces": 32, 00:10:29.681 "min_cntlid": 1, 00:10:29.681 "model_number": "SPDK bdev Controller", 00:10:29.681 "namespaces": [ 00:10:29.681 { 00:10:29.681 "bdev_name": "Null4", 00:10:29.681 "name": "Null4", 00:10:29.681 "nguid": "EE3050F983104FECA8E276B6D65E7550", 00:10:29.681 "nsid": 1, 00:10:29.681 "uuid": "ee3050f9-8310-4fec-a8e2-76b6d65e7550" 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:29.681 "serial_number": "SPDK00000000000004", 00:10:29.681 "subtype": 
"NVMe" 00:10:29.681 } 00:10:29.681 ] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.681 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.682 08:50:45 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:29.682 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.940 rmmod nvme_tcp 00:10:29.940 rmmod nvme_fabrics 00:10:29.940 rmmod nvme_keyring 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 65444 ']' 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 65444 00:10:29.940 08:50:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 65444 ']' 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 65444 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:29.940 
08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65444 00:10:29.940 killing process with pid 65444 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65444' 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 65444 00:10:29.940 [2024-05-15 08:50:46.029159] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:29.940 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 65444 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:30.200 00:10:30.200 real 0m2.285s 00:10:30.200 user 0m6.292s 00:10:30.200 sys 0m0.527s 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:30.200 08:50:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:30.200 ************************************ 00:10:30.200 END TEST nvmf_target_discovery 00:10:30.200 ************************************ 00:10:30.200 08:50:46 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:30.200 08:50:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:30.200 08:50:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:30.200 08:50:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.200 ************************************ 00:10:30.200 START TEST nvmf_referrals 00:10:30.200 ************************************ 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:30.200 * Looking for test storage... 
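The referrals test that begins here exercises the same nvmf_discovery_add_referral/nvmf_discovery_remove_referral RPCs, this time against 127.0.0.2-127.0.0.4 on port 4430 (values from its header below); a minimal hedged round trip looks like:

  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_get_referrals          # assumed RPC name; lists configured referrals
  nvme discover -t tcp -a 10.0.0.2 -s 4420               # referral appears as a 'discovery subsystem referral' entry
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430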
00:10:30.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:30.200 Cannot find device "nvmf_tgt_br" 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:10:30.200 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.459 Cannot find device "nvmf_tgt_br2" 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:30.459 Cannot find device "nvmf_tgt_br" 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:30.459 Cannot find device "nvmf_tgt_br2" 
00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.459 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:30.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:10:30.718 00:10:30.718 --- 10.0.0.2 ping statistics --- 00:10:30.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.718 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:30.718 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.718 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:10:30.718 00:10:30.718 --- 10.0.0.3 ping statistics --- 00:10:30.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.718 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:10:30.718 00:10:30.718 --- 10.0.0.1 ping statistics --- 00:10:30.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.718 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=65666 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 65666 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 65666 ']' 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:30.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:30.718 08:50:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.718 [2024-05-15 08:50:46.836510] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:30.718 [2024-05-15 08:50:46.836619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.977 [2024-05-15 08:50:46.972008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.977 [2024-05-15 08:50:47.032240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.977 [2024-05-15 08:50:47.032295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.977 [2024-05-15 08:50:47.032307] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.977 [2024-05-15 08:50:47.032315] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.977 [2024-05-15 08:50:47.032323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.977 [2024-05-15 08:50:47.032416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.977 [2024-05-15 08:50:47.032576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.977 [2024-05-15 08:50:47.032892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.977 [2024-05-15 08:50:47.032909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 [2024-05-15 08:50:47.893277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 [2024-05-15 08:50:47.920767] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:31.911 [2024-05-15 08:50:47.921420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 08:50:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:31.911 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:32.169 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # 
echo 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:32.170 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 
--hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:32.429 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.689 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:32.948 08:50:48 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:32.948 08:50:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:32.948 08:50:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:32.948 08:50:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.949 rmmod nvme_tcp 00:10:32.949 rmmod nvme_fabrics 00:10:32.949 rmmod nvme_keyring 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 65666 ']' 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 65666 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 65666 ']' 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 65666 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65666 00:10:32.949 killing process with pid 65666 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65666' 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 65666 00:10:32.949 [2024-05-15 08:50:49.149613] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:32.949 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 65666 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:33.208 00:10:33.208 real 0m3.077s 00:10:33.208 user 0m10.245s 00:10:33.208 sys 0m0.772s 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:33.208 08:50:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 ************************************ 00:10:33.208 END TEST nvmf_referrals 00:10:33.208 ************************************ 00:10:33.208 08:50:49 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:33.208 08:50:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:33.208 08:50:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.208 08:50:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 ************************************ 00:10:33.208 START TEST nvmf_connect_disconnect 00:10:33.208 ************************************ 00:10:33.208 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:33.467 * Looking for test storage... 00:10:33.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:33.467 08:50:49 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.467 08:50:49 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:33.467 Cannot find device "nvmf_tgt_br" 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:33.467 Cannot find device "nvmf_tgt_br2" 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:33.467 Cannot find device "nvmf_tgt_br" 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:33.467 Cannot find device "nvmf_tgt_br2" 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:33.467 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:33.468 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link 
set nvmf_init_if up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:33.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:10:33.726 00:10:33.726 --- 10.0.0.2 ping statistics --- 00:10:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.726 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:33.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:33.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:33.726 00:10:33.726 --- 10.0.0.3 ping statistics --- 00:10:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.726 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:33.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:33.726 00:10:33.726 --- 10.0.0.1 ping statistics --- 00:10:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.726 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=65970 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 65970 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 65970 ']' 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.726 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:33.727 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.727 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:33.727 08:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.985 [2024-05-15 08:50:49.966358] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:33.985 [2024-05-15 08:50:49.966458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.985 [2024-05-15 08:50:50.102462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.985 [2024-05-15 08:50:50.163084] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
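The nvmf_veth_init sequence traced above is what gives every test in this run its network: one veth pair (nvmf_init_if/nvmf_init_br) stays on the host as the initiator side with 10.0.0.1/24, while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace and their host-side peers are enslaved to the nvmf_br bridge, with iptables admitting NVMe/TCP on port 4420. Condensed into a standalone sketch (same commands as the trace; assumes root plus iproute2 and iptables):

  # namespace for the target and three veth pairs, names as in the trace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # target-side ends move into the namespace; the initiator end stays on the host
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # admit NVMe/TCP (port 4420) on the initiator interface and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that the bridge passes traffic before the target application is started.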
00:10:33.985 [2024-05-15 08:50:50.163145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.985 [2024-05-15 08:50:50.163158] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.985 [2024-05-15 08:50:50.163166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.985 [2024-05-15 08:50:50.163174] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.985 [2024-05-15 08:50:50.163270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.985 [2024-05-15 08:50:50.163507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.985 [2024-05-15 08:50:50.163890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.985 [2024-05-15 08:50:50.163941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.922 08:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:34.922 08:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:10:34.922 08:50:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:34.922 08:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.922 08:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 [2024-05-15 08:50:51.026751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 [2024-05-15 08:50:51.104726] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:34.922 [2024-05-15 08:50:51.105028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:34.922 08:50:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:37.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.401 rmmod nvme_tcp 00:10:46.401 rmmod nvme_fabrics 00:10:46.401 rmmod nvme_keyring 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 65970 ']' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 65970 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 65970 ']' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 65970 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65970 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:46.401 killing process with pid 65970 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65970' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 65970 00:10:46.401 [2024-05-15 08:51:02.416218] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 65970 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.401 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.660 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:46.660 00:10:46.660 real 0m13.239s 00:10:46.660 user 0m48.532s 00:10:46.660 sys 0m1.901s 00:10:46.660 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.660 08:51:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 ************************************ 00:10:46.661 END TEST nvmf_connect_disconnect 00:10:46.661 ************************************ 00:10:46.661 08:51:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:46.661 08:51:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:46.661 08:51:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:46.661 08:51:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 ************************************ 00:10:46.661 START TEST nvmf_multitarget 00:10:46.661 ************************************ 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:46.661 * Looking for test storage... 
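The nvmf_connect_disconnect test that finishes above is a thin loop: the target is configured over JSON-RPC (TCP transport, one malloc-backed namespace, one listener on 10.0.0.2:4420) and the initiator then attaches and detaches num_iterations=5 times, producing the five "disconnected 1 controller(s)" lines. Condensed, with the initiator-side commands shown as an assumption since only their output appears in the trace:

  # rpc_cmd is the harness helper that forwards its arguments to the target's JSON-RPC socket
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                 # 64 MiB bdev, 512-byte blocks -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side (nvme-cli), five iterations as in the trace
  for i in 1 2 3 4 5; do
      nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
  done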
00:10:46.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.661 08:51:02 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:46.661 Cannot find device "nvmf_tgt_br" 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.661 Cannot find device "nvmf_tgt_br2" 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:46.661 Cannot find device "nvmf_tgt_br" 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:46.661 Cannot find device "nvmf_tgt_br2" 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:10:46.661 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:46.919 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:46.919 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:10:46.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.920 08:51:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:46.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:46.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:10:46.920 00:10:46.920 --- 10.0.0.2 ping statistics --- 00:10:46.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.920 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:46.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:46.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:46.920 00:10:46.920 --- 10.0.0.3 ping statistics --- 00:10:46.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.920 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:46.920 00:10:46.920 --- 10.0.0.1 ping statistics --- 00:10:46.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.920 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:46.920 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66374 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66374 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 66374 ']' 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
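Each test section restarts the target the same way that is traced here: nvmf_tgt is launched inside the target namespace with full debug tracing (-e 0xFFFF) on a four-core mask (-m 0xF), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A simplified equivalent of that launch-and-wait step (waitforlisten's real implementation lives in autotest_common.sh; the rpc.py path is assumed from the repo layout seen elsewhere in the trace):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll the RPC socket until the target is up, bailing out if the process dies first
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
      sleep 0.5
  done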
00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:47.178 08:51:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:47.178 [2024-05-15 08:51:03.231496] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:47.178 [2024-05-15 08:51:03.231617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.178 [2024-05-15 08:51:03.377436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.436 [2024-05-15 08:51:03.449095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.436 [2024-05-15 08:51:03.449349] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.436 [2024-05-15 08:51:03.449512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.436 [2024-05-15 08:51:03.449699] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.436 [2024-05-15 08:51:03.449831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.436 [2024-05-15 08:51:03.449977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.436 [2024-05-15 08:51:03.450109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.436 [2024-05-15 08:51:03.450714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.436 [2024-05-15 08:51:03.450726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.001 08:51:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:48.001 08:51:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:10:48.001 08:51:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.001 08:51:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.001 08:51:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:48.259 08:51:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.259 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:48.259 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:48.259 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:48.259 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:48.259 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:48.517 "nvmf_tgt_1" 00:10:48.517 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:48.517 "nvmf_tgt_2" 00:10:48.517 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:48.517 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:48.775 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:48.775 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:48.775 true 00:10:48.775 08:51:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:49.032 true 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:49.032 rmmod nvme_tcp 00:10:49.032 rmmod nvme_fabrics 00:10:49.032 rmmod nvme_keyring 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66374 ']' 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66374 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 66374 ']' 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 66374 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:10:49.032 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:49.291 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66374 00:10:49.291 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:49.291 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:49.291 killing process with pid 66374 00:10:49.291 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66374' 00:10:49.291 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 66374 00:10:49.291 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 66374 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:49.292 ************************************ 00:10:49.292 END TEST nvmf_multitarget 00:10:49.292 ************************************ 00:10:49.292 00:10:49.292 real 0m2.794s 00:10:49.292 user 0m9.224s 00:10:49.292 sys 0m0.654s 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:49.292 08:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.551 08:51:05 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:49.551 08:51:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:49.551 08:51:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:49.551 08:51:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:49.551 ************************************ 00:10:49.551 START TEST nvmf_rpc 00:10:49.551 ************************************ 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:49.551 * Looking for test storage... 
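The nvmf_multitarget test that ends above drives the multi-target RPCs through test/nvmf/target/multitarget_rpc.py: it confirms only the default target exists, adds two more, checks the count with jq, removes them again, and checks the count is back to one. Condensed from the trace:

  MT=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($MT nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $MT nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32 as passed in the trace
  $MT nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($MT nvmf_get_targets | jq length)" -eq 3 ]
  $MT nvmf_delete_target -n nvmf_tgt_1
  $MT nvmf_delete_target -n nvmf_tgt_2
  [ "$($MT nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target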
00:10:49.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.551 08:51:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:49.552 Cannot find device "nvmf_tgt_br" 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.552 Cannot find device "nvmf_tgt_br2" 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:49.552 Cannot find device "nvmf_tgt_br" 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:49.552 Cannot find device "nvmf_tgt_br2" 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:10:49.552 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:49.553 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.811 08:51:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:49.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:49.811 00:10:49.811 --- 10.0.0.2 ping statistics --- 00:10:49.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.811 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:49.811 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:49.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:49.812 00:10:49.812 --- 10.0.0.3 ping statistics --- 00:10:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.812 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:49.812 00:10:49.812 --- 10.0.0.1 ping statistics --- 00:10:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.812 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=66605 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 66605 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 66605 ']' 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:49.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:49.812 08:51:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.070 [2024-05-15 08:51:06.096189] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:50.070 [2024-05-15 08:51:06.096286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.070 [2024-05-15 08:51:06.236220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.329 [2024-05-15 08:51:06.306964] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.329 [2024-05-15 08:51:06.307020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
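Once this target is up, the rpc.sh run below leans on two small helpers to assert on nvmf_get_stats output: jcount counts how many values a jq filter produces and jsum adds them up. Against a freshly started target with a 0xF core mask that means four poll groups and zero qpairs, which is what the trace that follows verifies. A sketch of the same checks (rpc_cmd again standing for the harness RPC helper):

  stats="$(rpc_cmd nvmf_get_stats)"

  # jcount-style check: one poll group per core in the 0xF mask -> expect 4
  echo "$stats" | jq '.poll_groups[].name' | wc -l

  # jsum-style checks: no admin or I/O qpairs before any host connects -> expect 0
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'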
00:10:50.329 [2024-05-15 08:51:06.307036] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.329 [2024-05-15 08:51:06.307046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.329 [2024-05-15 08:51:06.307055] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.329 [2024-05-15 08:51:06.307250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.329 [2024-05-15 08:51:06.307376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.329 [2024-05-15 08:51:06.308026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.329 [2024-05-15 08:51:06.308033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:51.265 "poll_groups": [ 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_000", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [] 00:10:51.265 }, 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_001", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [] 00:10:51.265 }, 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_002", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [] 00:10:51.265 }, 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_003", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [] 00:10:51.265 } 00:10:51.265 ], 00:10:51.265 "tick_rate": 2200000000 00:10:51.265 }' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.265 [2024-05-15 08:51:07.295680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:51.265 "poll_groups": [ 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_000", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [ 00:10:51.265 { 00:10:51.265 "trtype": "TCP" 00:10:51.265 } 00:10:51.265 ] 00:10:51.265 }, 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_001", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [ 00:10:51.265 { 00:10:51.265 "trtype": "TCP" 00:10:51.265 } 00:10:51.265 ] 00:10:51.265 }, 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_002", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [ 00:10:51.265 { 00:10:51.265 "trtype": "TCP" 00:10:51.265 } 00:10:51.265 ] 00:10:51.265 }, 00:10:51.265 { 00:10:51.265 "admin_qpairs": 0, 00:10:51.265 "completed_nvme_io": 0, 00:10:51.265 "current_admin_qpairs": 0, 00:10:51.265 "current_io_qpairs": 0, 00:10:51.265 "io_qpairs": 0, 00:10:51.265 "name": "nvmf_tgt_poll_group_003", 00:10:51.265 "pending_bdev_io": 0, 00:10:51.265 "transports": [ 00:10:51.265 { 00:10:51.265 "trtype": "TCP" 00:10:51.265 } 00:10:51.265 ] 00:10:51.265 } 00:10:51.265 ], 00:10:51.265 "tick_rate": 2200000000 00:10:51.265 }' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.265 Malloc1 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.265 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.266 [2024-05-15 08:51:07.491922] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:51.266 [2024-05-15 08:51:07.492196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe -a 10.0.0.2 -s 4420 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 
--hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe -a 10.0.0.2 -s 4420 00:10:51.266 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe -a 10.0.0.2 -s 4420 00:10:51.524 [2024-05-15 08:51:07.514330] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe' 00:10:51.524 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:51.524 could not add new controller: failed to write to nvme-fabrics device 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:51.524 08:51:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.056 [2024-05-15 08:51:09.805596] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe' 00:10:54.056 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:54.056 could not add new controller: failed to write to nvme-fabrics device 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:54.056 08:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:55.958 08:51:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:55.958 08:51:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:55.958 08:51:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 
08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 [2024-05-15 08:51:12.099336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.958 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.217 08:51:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.217 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:56.217 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.217 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:56.217 08:51:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:58.115 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:58.115 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:58.116 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.116 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:58.116 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.116 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:58.116 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.374 [2024-05-15 08:51:14.398468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 
--hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.374 08:51:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.375 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:58.375 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.375 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:58.375 08:51:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.902 
08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 [2024-05-15 08:51:16.703093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.902 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:00.903 08:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.856 08:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.856 [2024-05-15 08:51:18.998480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.856 08:51:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.115 08:51:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.115 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:03.115 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.115 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:03.115 08:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:05.017 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.275 [2024-05-15 08:51:21.309931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:05.275 08:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 
$loops) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 [2024-05-15 08:51:23.713300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 [2024-05-15 08:51:23.761360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.806 
08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 [2024-05-15 08:51:23.809401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 [2024-05-15 08:51:23.857447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.806 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 [2024-05-15 08:51:23.905508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:07.807 "poll_groups": [ 00:11:07.807 { 00:11:07.807 "admin_qpairs": 2, 00:11:07.807 "completed_nvme_io": 67, 00:11:07.807 "current_admin_qpairs": 0, 00:11:07.807 "current_io_qpairs": 0, 00:11:07.807 "io_qpairs": 16, 00:11:07.807 "name": "nvmf_tgt_poll_group_000", 00:11:07.807 "pending_bdev_io": 0, 00:11:07.807 "transports": [ 00:11:07.807 { 00:11:07.807 "trtype": "TCP" 00:11:07.807 } 00:11:07.807 ] 00:11:07.807 }, 00:11:07.807 { 00:11:07.807 "admin_qpairs": 3, 00:11:07.807 "completed_nvme_io": 66, 00:11:07.807 "current_admin_qpairs": 0, 00:11:07.807 "current_io_qpairs": 0, 00:11:07.807 
"io_qpairs": 17, 00:11:07.807 "name": "nvmf_tgt_poll_group_001", 00:11:07.807 "pending_bdev_io": 0, 00:11:07.807 "transports": [ 00:11:07.807 { 00:11:07.807 "trtype": "TCP" 00:11:07.807 } 00:11:07.807 ] 00:11:07.807 }, 00:11:07.807 { 00:11:07.807 "admin_qpairs": 1, 00:11:07.807 "completed_nvme_io": 167, 00:11:07.807 "current_admin_qpairs": 0, 00:11:07.807 "current_io_qpairs": 0, 00:11:07.807 "io_qpairs": 19, 00:11:07.807 "name": "nvmf_tgt_poll_group_002", 00:11:07.807 "pending_bdev_io": 0, 00:11:07.807 "transports": [ 00:11:07.807 { 00:11:07.807 "trtype": "TCP" 00:11:07.807 } 00:11:07.807 ] 00:11:07.807 }, 00:11:07.807 { 00:11:07.807 "admin_qpairs": 1, 00:11:07.807 "completed_nvme_io": 120, 00:11:07.807 "current_admin_qpairs": 0, 00:11:07.807 "current_io_qpairs": 0, 00:11:07.807 "io_qpairs": 18, 00:11:07.807 "name": "nvmf_tgt_poll_group_003", 00:11:07.807 "pending_bdev_io": 0, 00:11:07.807 "transports": [ 00:11:07.807 { 00:11:07.807 "trtype": "TCP" 00:11:07.807 } 00:11:07.807 ] 00:11:07.807 } 00:11:07.807 ], 00:11:07.807 "tick_rate": 2200000000 00:11:07.807 }' 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:07.807 08:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.807 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:07.807 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:07.807 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:07.807 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:07.807 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.066 rmmod nvme_tcp 00:11:08.066 rmmod nvme_fabrics 00:11:08.066 rmmod nvme_keyring 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 66605 ']' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 66605 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 66605 ']' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 66605 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:11:08.066 
08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66605 00:11:08.066 killing process with pid 66605 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66605' 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 66605 00:11:08.066 [2024-05-15 08:51:24.176508] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:08.066 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 66605 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:08.390 00:11:08.390 real 0m18.852s 00:11:08.390 user 1m10.931s 00:11:08.390 sys 0m2.487s 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:08.390 08:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.390 ************************************ 00:11:08.390 END TEST nvmf_rpc 00:11:08.390 ************************************ 00:11:08.390 08:51:24 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:08.390 08:51:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:08.390 08:51:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.390 08:51:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.390 ************************************ 00:11:08.390 START TEST nvmf_invalid 00:11:08.390 ************************************ 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:08.390 * Looking for test storage... 
00:11:08.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.390 
08:51:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.390 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.391 08:51:24 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:08.391 Cannot find device "nvmf_tgt_br" 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.391 Cannot find device "nvmf_tgt_br2" 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:08.391 Cannot find device "nvmf_tgt_br" 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:11:08.391 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:08.663 Cannot find device "nvmf_tgt_br2" 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.663 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:08.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:11:08.663 00:11:08.663 --- 10.0.0.2 ping statistics --- 00:11:08.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.663 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:08.663 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:08.663 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:11:08.663 00:11:08.663 --- 10.0.0.3 ping statistics --- 00:11:08.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.663 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:11:08.663 00:11:08.663 --- 10.0.0.1 ping statistics --- 00:11:08.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.663 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.663 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67111 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67111 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 67111 ']' 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:08.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:08.922 08:51:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:08.922 [2024-05-15 08:51:24.960948] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
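For reference, the nvmf_veth_init sequence traced above builds the two-namespace veth/bridge topology that the 10.0.0.x pings just verified. Condensed into a stand-alone sketch (interface, bridge, and namespace names are taken from the trace; ordering slightly compressed and error handling omitted):

    # Target-side interfaces live in the nvmf_tgt_ns_spdk namespace; the host keeps nvmf_init_if.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # One bridge ties the host-side veth peers together so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Allow NVMe/TCP traffic on port 4420 and bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT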
00:11:08.922 [2024-05-15 08:51:24.961050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.922 [2024-05-15 08:51:25.100923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.180 [2024-05-15 08:51:25.170892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.180 [2024-05-15 08:51:25.170938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.180 [2024-05-15 08:51:25.170951] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.180 [2024-05-15 08:51:25.170961] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.180 [2024-05-15 08:51:25.170970] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.180 [2024-05-15 08:51:25.171120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.180 [2024-05-15 08:51:25.171247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.180 [2024-05-15 08:51:25.171295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.180 [2024-05-15 08:51:25.171297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8026 00:11:10.116 [2024-05-15 08:51:26.310679] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/15 08:51:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8026 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:10.116 request: 00:11:10.116 { 00:11:10.116 "method": "nvmf_create_subsystem", 00:11:10.116 "params": { 00:11:10.116 "nqn": "nqn.2016-06.io.spdk:cnode8026", 00:11:10.116 "tgt_name": "foobar" 00:11:10.116 } 00:11:10.116 } 00:11:10.116 Got JSON-RPC error response 00:11:10.116 GoRPCClient: error on JSON-RPC call' 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/15 08:51:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8026 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:10.116 request: 00:11:10.116 { 
00:11:10.116 "method": "nvmf_create_subsystem", 00:11:10.116 "params": { 00:11:10.116 "nqn": "nqn.2016-06.io.spdk:cnode8026", 00:11:10.116 "tgt_name": "foobar" 00:11:10.116 } 00:11:10.116 } 00:11:10.116 Got JSON-RPC error response 00:11:10.116 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:10.116 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13542 00:11:10.684 [2024-05-15 08:51:26.635013] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13542: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:10.684 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/15 08:51:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13542 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:10.684 request: 00:11:10.684 { 00:11:10.684 "method": "nvmf_create_subsystem", 00:11:10.684 "params": { 00:11:10.684 "nqn": "nqn.2016-06.io.spdk:cnode13542", 00:11:10.684 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:10.684 } 00:11:10.684 } 00:11:10.684 Got JSON-RPC error response 00:11:10.684 GoRPCClient: error on JSON-RPC call' 00:11:10.684 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/15 08:51:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13542 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:10.684 request: 00:11:10.684 { 00:11:10.684 "method": "nvmf_create_subsystem", 00:11:10.684 "params": { 00:11:10.684 "nqn": "nqn.2016-06.io.spdk:cnode13542", 00:11:10.684 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:10.684 } 00:11:10.684 } 00:11:10.684 Got JSON-RPC error response 00:11:10.684 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:10.684 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:10.684 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5879 00:11:10.943 [2024-05-15 08:51:26.955286] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5879: invalid model number 'SPDK_Controller' 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/15 08:51:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode5879], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:10.943 request: 00:11:10.943 { 00:11:10.943 "method": "nvmf_create_subsystem", 00:11:10.943 "params": { 00:11:10.943 "nqn": "nqn.2016-06.io.spdk:cnode5879", 00:11:10.943 "model_number": "SPDK_Controller\u001f" 00:11:10.943 } 00:11:10.943 } 00:11:10.943 Got JSON-RPC error response 00:11:10.943 GoRPCClient: error on JSON-RPC call' 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/15 08:51:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode5879], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:10.943 request: 00:11:10.943 { 00:11:10.943 "method": "nvmf_create_subsystem", 00:11:10.943 "params": { 00:11:10.943 "nqn": "nqn.2016-06.io.spdk:cnode5879", 00:11:10.943 "model_number": "SPDK_Controller\u001f" 00:11:10.943 } 00:11:10.943 } 00:11:10.943 Got JSON-RPC error response 00:11:10.943 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.943 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:10.944 08:51:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '!wNGn{^dQ^6IH7h!YUqc'\''' 00:11:10.944 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '!wNGn{^dQ^6IH7h!YUqc'\''' nqn.2016-06.io.spdk:cnode16436 00:11:11.204 [2024-05-15 08:51:27.339629] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16436: invalid serial number '!wNGn{^dQ^6IH7h!YUqc'' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/05/15 08:51:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16436 serial_number:!wNGn{^dQ^6IH7h!YUqc'\''], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN !wNGn{^dQ^6IH7h!YUqc'\'' 00:11:11.204 request: 00:11:11.204 { 00:11:11.204 "method": "nvmf_create_subsystem", 00:11:11.204 "params": { 00:11:11.204 "nqn": "nqn.2016-06.io.spdk:cnode16436", 00:11:11.204 "serial_number": "!wNGn{^dQ^6IH7h!YUqc'\''" 00:11:11.204 } 00:11:11.204 } 00:11:11.204 Got JSON-RPC error response 00:11:11.204 GoRPCClient: error on JSON-RPC call' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/05/15 08:51:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16436 serial_number:!wNGn{^dQ^6IH7h!YUqc'], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN !wNGn{^dQ^6IH7h!YUqc' 00:11:11.204 request: 00:11:11.204 { 00:11:11.204 "method": "nvmf_create_subsystem", 00:11:11.204 "params": { 00:11:11.204 "nqn": "nqn.2016-06.io.spdk:cnode16436", 00:11:11.204 "serial_number": "!wNGn{^dQ^6IH7h!YUqc'" 00:11:11.204 } 00:11:11.204 } 00:11:11.204 Got JSON-RPC error response 00:11:11.204 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:11.204 
08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.204 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:11.205 
08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.205 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:11.466 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 
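The long per-character trace above is gen_random_s assembling a 41-character printable string one byte at a time (ASCII codes 32-127) for the invalid-model-number test; invalid.sh seeds RANDOM=0 earlier, so these "random" strings are reproducible between runs. A compact sketch of the pattern (the leading-'-' handling is an assumption; the trace only shows the final [[ ... == \- ]] check, not what happens when it matches):

    # Build a random printable string of the requested length, as traced above:
    # pick a code from 32-127, convert it to hex with printf, emit the byte with echo -e.
    gen_random_s() {
        local length=$1 ll c string=
        local chars=({32..127})                      # printable ASCII codes
        for ((ll = 0; ll < length; ll++)); do
            c=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")
            string+=$(echo -e "\x$c")
        done
        # Assumption: guard against a leading '-' so the value is never parsed as an rpc.py option.
        [[ ${string::1} == "-" ]] && string=" ${string:1}"
        echo "$string"
    }

In this run the helper produced the 21-character serial number and the 41-character model number that the nvmf_create_subsystem calls below reject as Invalid SN / Invalid MN.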
00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ' 00:11:11.467 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ' nqn.2016-06.io.spdk:cnode28974 00:11:11.727 [2024-05-15 08:51:27.784021] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28974: invalid model number 'tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ' 00:11:11.727 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/05/15 08:51:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ nqn:nqn.2016-06.io.spdk:cnode28974], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ 00:11:11.727 request: 00:11:11.727 { 00:11:11.727 "method": "nvmf_create_subsystem", 00:11:11.727 "params": { 00:11:11.727 "nqn": "nqn.2016-06.io.spdk:cnode28974", 00:11:11.727 "model_number": "tZ*\\UU+UPH?RZ\\BD.1;pJE9z yeAG\u007f8et/u=f}^hZ" 00:11:11.727 } 00:11:11.727 } 00:11:11.727 Got JSON-RPC error response 00:11:11.727 GoRPCClient: error on JSON-RPC call' 00:11:11.727 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/05/15 08:51:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ nqn:nqn.2016-06.io.spdk:cnode28974], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN tZ*\UU+UPH?RZ\BD.1;pJE9z yeAG8et/u=f}^hZ 00:11:11.727 request: 00:11:11.727 { 00:11:11.727 "method": "nvmf_create_subsystem", 00:11:11.727 "params": { 00:11:11.727 "nqn": "nqn.2016-06.io.spdk:cnode28974", 00:11:11.727 "model_number": "tZ*\\UU+UPH?RZ\\BD.1;pJE9z yeAG\u007f8et/u=f}^hZ" 00:11:11.727 } 00:11:11.727 } 00:11:11.727 Got JSON-RPC error response 00:11:11.727 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:11.727 08:51:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:11.985 [2024-05-15 08:51:28.072340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.985 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:12.242 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:12.242 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:12.242 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:12.242 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:12.242 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:12.499 [2024-05-15 08:51:28.680370] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:12.499 [2024-05-15 08:51:28.680503] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:12.499 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/05/15 08:51:28 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: 
map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:12.499 request: 00:11:12.499 { 00:11:12.499 "method": "nvmf_subsystem_remove_listener", 00:11:12.499 "params": { 00:11:12.499 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:12.499 "listen_address": { 00:11:12.499 "trtype": "tcp", 00:11:12.499 "traddr": "", 00:11:12.499 "trsvcid": "4421" 00:11:12.499 } 00:11:12.499 } 00:11:12.499 } 00:11:12.499 Got JSON-RPC error response 00:11:12.499 GoRPCClient: error on JSON-RPC call' 00:11:12.499 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/05/15 08:51:28 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:12.499 request: 00:11:12.499 { 00:11:12.499 "method": "nvmf_subsystem_remove_listener", 00:11:12.499 "params": { 00:11:12.499 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:12.499 "listen_address": { 00:11:12.499 "trtype": "tcp", 00:11:12.499 "traddr": "", 00:11:12.499 "trsvcid": "4421" 00:11:12.499 } 00:11:12.499 } 00:11:12.499 } 00:11:12.499 Got JSON-RPC error response 00:11:12.499 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:12.499 08:51:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16912 -i 0 00:11:13.064 [2024-05-15 08:51:28.996668] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16912: invalid cntlid range [0-65519] 00:11:13.064 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16912], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:13.064 request: 00:11:13.064 { 00:11:13.064 "method": "nvmf_create_subsystem", 00:11:13.064 "params": { 00:11:13.064 "nqn": "nqn.2016-06.io.spdk:cnode16912", 00:11:13.064 "min_cntlid": 0 00:11:13.064 } 00:11:13.064 } 00:11:13.064 Got JSON-RPC error response 00:11:13.064 GoRPCClient: error on JSON-RPC call' 00:11:13.064 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16912], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:13.064 request: 00:11:13.064 { 00:11:13.064 "method": "nvmf_create_subsystem", 00:11:13.064 "params": { 00:11:13.064 "nqn": "nqn.2016-06.io.spdk:cnode16912", 00:11:13.064 "min_cntlid": 0 00:11:13.064 } 00:11:13.064 } 00:11:13.064 Got JSON-RPC error response 00:11:13.064 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:13.064 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5855 -i 65520 00:11:13.064 [2024-05-15 08:51:29.248887] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5855: invalid cntlid range [65520-65519] 00:11:13.064 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/05/15 
08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5855], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:13.064 request: 00:11:13.064 { 00:11:13.064 "method": "nvmf_create_subsystem", 00:11:13.064 "params": { 00:11:13.064 "nqn": "nqn.2016-06.io.spdk:cnode5855", 00:11:13.064 "min_cntlid": 65520 00:11:13.064 } 00:11:13.064 } 00:11:13.064 Got JSON-RPC error response 00:11:13.064 GoRPCClient: error on JSON-RPC call' 00:11:13.064 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5855], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:13.064 request: 00:11:13.064 { 00:11:13.064 "method": "nvmf_create_subsystem", 00:11:13.064 "params": { 00:11:13.064 "nqn": "nqn.2016-06.io.spdk:cnode5855", 00:11:13.064 "min_cntlid": 65520 00:11:13.064 } 00:11:13.064 } 00:11:13.064 Got JSON-RPC error response 00:11:13.064 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:13.064 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11252 -I 0 00:11:13.321 [2024-05-15 08:51:29.489112] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11252: invalid cntlid range [1-0] 00:11:13.321 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11252], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:13.321 request: 00:11:13.321 { 00:11:13.321 "method": "nvmf_create_subsystem", 00:11:13.321 "params": { 00:11:13.321 "nqn": "nqn.2016-06.io.spdk:cnode11252", 00:11:13.321 "max_cntlid": 0 00:11:13.321 } 00:11:13.321 } 00:11:13.321 Got JSON-RPC error response 00:11:13.321 GoRPCClient: error on JSON-RPC call' 00:11:13.321 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11252], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:13.321 request: 00:11:13.321 { 00:11:13.321 "method": "nvmf_create_subsystem", 00:11:13.321 "params": { 00:11:13.321 "nqn": "nqn.2016-06.io.spdk:cnode11252", 00:11:13.321 "max_cntlid": 0 00:11:13.321 } 00:11:13.321 } 00:11:13.321 Got JSON-RPC error response 00:11:13.321 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:13.321 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27352 -I 65520 00:11:13.578 [2024-05-15 08:51:29.721332] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27352: invalid cntlid range [1-65520] 00:11:13.578 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode27352], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range 
[1-65520] 00:11:13.578 request: 00:11:13.578 { 00:11:13.578 "method": "nvmf_create_subsystem", 00:11:13.578 "params": { 00:11:13.578 "nqn": "nqn.2016-06.io.spdk:cnode27352", 00:11:13.578 "max_cntlid": 65520 00:11:13.578 } 00:11:13.578 } 00:11:13.578 Got JSON-RPC error response 00:11:13.578 GoRPCClient: error on JSON-RPC call' 00:11:13.578 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode27352], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:13.578 request: 00:11:13.578 { 00:11:13.578 "method": "nvmf_create_subsystem", 00:11:13.578 "params": { 00:11:13.578 "nqn": "nqn.2016-06.io.spdk:cnode27352", 00:11:13.578 "max_cntlid": 65520 00:11:13.578 } 00:11:13.578 } 00:11:13.578 Got JSON-RPC error response 00:11:13.578 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:13.578 08:51:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5140 -i 6 -I 5 00:11:13.836 [2024-05-15 08:51:29.977580] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5140: invalid cntlid range [6-5] 00:11:13.836 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode5140], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:13.836 request: 00:11:13.836 { 00:11:13.836 "method": "nvmf_create_subsystem", 00:11:13.836 "params": { 00:11:13.836 "nqn": "nqn.2016-06.io.spdk:cnode5140", 00:11:13.836 "min_cntlid": 6, 00:11:13.836 "max_cntlid": 5 00:11:13.836 } 00:11:13.836 } 00:11:13.836 Got JSON-RPC error response 00:11:13.836 GoRPCClient: error on JSON-RPC call' 00:11:13.836 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/05/15 08:51:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode5140], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:13.836 request: 00:11:13.836 { 00:11:13.836 "method": "nvmf_create_subsystem", 00:11:13.836 "params": { 00:11:13.836 "nqn": "nqn.2016-06.io.spdk:cnode5140", 00:11:13.836 "min_cntlid": 6, 00:11:13.836 "max_cntlid": 5 00:11:13.836 } 00:11:13.836 } 00:11:13.836 Got JSON-RPC error response 00:11:13.836 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:13.836 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:14.094 { 00:11:14.094 "name": "foobar", 00:11:14.094 "method": "nvmf_delete_target", 00:11:14.094 "req_id": 1 00:11:14.094 } 00:11:14.094 Got JSON-RPC error response 00:11:14.094 response: 00:11:14.094 { 00:11:14.094 "code": -32602, 00:11:14.094 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:11:14.094 }' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:14.094 { 00:11:14.094 "name": "foobar", 00:11:14.094 "method": "nvmf_delete_target", 00:11:14.094 "req_id": 1 00:11:14.094 } 00:11:14.094 Got JSON-RPC error response 00:11:14.094 response: 00:11:14.094 { 00:11:14.094 "code": -32602, 00:11:14.094 "message": "The specified target doesn't exist, cannot delete it." 00:11:14.094 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.094 rmmod nvme_tcp 00:11:14.094 rmmod nvme_fabrics 00:11:14.094 rmmod nvme_keyring 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67111 ']' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67111 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 67111 ']' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 67111 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67111 00:11:14.094 killing process with pid 67111 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67111' 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 67111 00:11:14.094 [2024-05-15 08:51:30.215828] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:14.094 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 67111 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.351 
08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:14.351 00:11:14.351 real 0m5.997s 00:11:14.351 user 0m24.327s 00:11:14.351 sys 0m1.215s 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.351 08:51:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:14.351 ************************************ 00:11:14.351 END TEST nvmf_invalid 00:11:14.351 ************************************ 00:11:14.351 08:51:30 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:14.351 08:51:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:14.351 08:51:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.351 08:51:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.351 ************************************ 00:11:14.351 START TEST nvmf_abort 00:11:14.351 ************************************ 00:11:14.351 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:14.351 * Looking for test storage... 00:11:14.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.351 08:51:30 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:14.609 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:14.610 Cannot find device "nvmf_tgt_br" 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.610 Cannot find device "nvmf_tgt_br2" 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:14.610 Cannot find device "nvmf_tgt_br" 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:14.610 Cannot find device "nvmf_tgt_br2" 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:14.610 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:14.886 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:14.886 08:51:30 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:14.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:11:14.887 00:11:14.887 --- 10.0.0.2 ping statistics --- 00:11:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.887 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:14.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:14.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:11:14.887 00:11:14.887 --- 10.0.0.3 ping statistics --- 00:11:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.887 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:14.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:14.887 00:11:14.887 --- 10.0.0.1 ping statistics --- 00:11:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.887 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=67626 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 67626 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 67626 ']' 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:14.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:14.887 08:51:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:14.887 [2024-05-15 08:51:30.986322] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:11:14.887 [2024-05-15 08:51:30.986420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.151 [2024-05-15 08:51:31.127270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:15.151 [2024-05-15 08:51:31.192111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.151 [2024-05-15 08:51:31.192171] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.151 [2024-05-15 08:51:31.192186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.151 [2024-05-15 08:51:31.192198] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.151 [2024-05-15 08:51:31.192208] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.151 [2024-05-15 08:51:31.192345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.151 [2024-05-15 08:51:31.192866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.151 [2024-05-15 08:51:31.192896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.718 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:15.718 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:11:15.718 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.718 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.718 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.976 [2024-05-15 08:51:31.988888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.976 08:51:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.976 Malloc0 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:15.976 08:51:32 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.976 Delay0 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.976 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.977 [2024-05-15 08:51:32.052844] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:15.977 [2024-05-15 08:51:32.053088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.977 08:51:32 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:16.235 [2024-05-15 08:51:32.257628] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:18.147 Initializing NVMe Controllers 00:11:18.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:18.147 controller IO queue size 128 less than required 00:11:18.147 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:18.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:18.147 Initialization complete. Launching workers. 
00:11:18.147 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28888 00:11:18.147 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28953, failed to submit 62 00:11:18.147 success 28892, unsuccess 61, failed 0 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.147 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.147 rmmod nvme_tcp 00:11:18.147 rmmod nvme_fabrics 00:11:18.147 rmmod nvme_keyring 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 67626 ']' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 67626 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 67626 ']' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 67626 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67626 00:11:18.408 killing process with pid 67626 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67626' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 67626 00:11:18.408 [2024-05-15 08:51:34.420952] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 67626 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.408 08:51:34 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.408 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.668 08:51:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:18.668 ************************************ 00:11:18.668 END TEST nvmf_abort 00:11:18.668 ************************************ 00:11:18.668 00:11:18.668 real 0m4.156s 00:11:18.668 user 0m12.123s 00:11:18.668 sys 0m0.962s 00:11:18.668 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.668 08:51:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 08:51:34 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:18.668 08:51:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:18.668 08:51:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.668 08:51:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 ************************************ 00:11:18.668 START TEST nvmf_ns_hotplug_stress 00:11:18.668 ************************************ 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:18.668 * Looking for test storage... 00:11:18.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.668 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.669 
08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:18.669 Cannot find device "nvmf_tgt_br" 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:18.669 Cannot find device "nvmf_tgt_br2" 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:18.669 Cannot find device "nvmf_tgt_br" 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:18.669 Cannot find device "nvmf_tgt_br2" 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:11:18.669 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.928 08:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:18.928 08:51:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:18.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:18.928 00:11:18.928 --- 10.0.0.2 ping statistics --- 00:11:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.928 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:18.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:11:18.928 00:11:18.928 --- 10.0.0.3 ping statistics --- 00:11:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.928 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
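The nvmf_veth_init trace above builds the test topology: one veth pair for the initiator on the host, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, all host-side ends enslaved to the nvmf_br bridge, an iptables rule admitting TCP port 4420, and ping checks in both directions. A condensed reconstruction of that setup, using only the interface names and 10.0.0.x/24 addresses shown in the trace (the initial cleanup of leftover devices is omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the three host-side ends together
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace reaches the initiator address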
00:11:18.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:18.928 00:11:18.928 --- 10.0.0.1 ping statistics --- 00:11:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.928 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:18.928 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=67886 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 67886 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 67886 ']' 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.929 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:19.188 [2024-05-15 08:51:35.213247] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:11:19.188 [2024-05-15 08:51:35.213346] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.188 [2024-05-15 08:51:35.347054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.188 [2024-05-15 08:51:35.411153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
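nvmfappstart launches the target inside the namespace and blocks until its RPC socket (/var/tmp/spdk.sock) answers, after which the script provisions the transport, subsystem, listeners and bdevs that the stress loop uses. A condensed sketch of the bring-up traced around this point; paths, NQN and parameters are taken from the trace, while the rpc_get_methods polling loop is only an assumed stand-in for waitforlisten:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until "$rpc_py" rpc_get_methods &> /dev/null; do sleep 0.1; done   # assumed stand-in for waitforlisten
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as traced
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" bdev_malloc_create 32 512 -b Malloc0                     # 32 MB malloc bdev, 512-byte blocks
"$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # Delay0 becomes namespace 1
"$rpc_py" bdev_null_create NULL1 1000 512
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # NULL1 becomes namespace 2
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!                                                        # 68008 in this run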
00:11:19.189 [2024-05-15 08:51:35.411466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.189 [2024-05-15 08:51:35.411690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.189 [2024-05-15 08:51:35.411871] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.189 [2024-05-15 08:51:35.411971] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.189 [2024-05-15 08:51:35.412180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.189 [2024-05-15 08:51:35.412259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.189 [2024-05-15 08:51:35.412266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:19.448 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.707 [2024-05-15 08:51:35.835950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.707 08:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.966 08:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.225 [2024-05-15 08:51:36.417757] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:20.225 [2024-05-15 08:51:36.418434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.225 08:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:20.796 08:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:20.796 Malloc0 00:11:21.054 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:21.313 Delay0 00:11:21.313 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.572 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:21.830 NULL1 00:11:21.830 08:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:22.089 08:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68008 00:11:22.089 08:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:22.089 08:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:22.089 08:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.503 Read completed with error (sct=0, sc=11) 00:11:23.503 08:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:23.766 08:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:23.766 08:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:24.024 true 00:11:24.024 08:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:24.024 08:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.590 08:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.848 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:24.848 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:25.106 true 00:11:25.106 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:25.106 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.672 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.672 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:25.672 08:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:26.237 true 00:11:26.237 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:26.237 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.495 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.753 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:26.753 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:26.753 true 00:11:26.753 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:26.753 08:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.319 08:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.577 08:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:27.577 08:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:27.835 true 00:11:27.835 08:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:27.835 08:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.769 08:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.028 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:29.028 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:29.287 true 00:11:29.287 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:29.287 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.546 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.804 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:29.804 08:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:30.063 true 00:11:30.063 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:30.063 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.322 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.580 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:30.580 08:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:31.147 true 00:11:31.147 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:31.147 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.147 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.405 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:31.405 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:31.667 true 00:11:31.926 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:31.926 08:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.866 08:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.866 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:32.866 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:33.124 true 00:11:33.382 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:33.382 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.382 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.948 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:33.948 08:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:33.948 true 00:11:34.207 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:34.207 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.207 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.466 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1012 00:11:34.466 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:35.036 true 00:11:35.036 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:35.036 08:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.604 08:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.862 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:35.862 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:36.120 true 00:11:36.120 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:36.120 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.687 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.687 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:36.687 08:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:36.944 true 00:11:36.944 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:36.944 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.203 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.460 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:37.460 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:37.719 true 00:11:37.719 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:37.719 08:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.655 08:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.913 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:38.913 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:39.172 true 00:11:39.172 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:39.172 08:51:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.430 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.688 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:39.688 08:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:39.946 true 00:11:39.946 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:39.946 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.204 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.462 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:40.462 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:40.722 true 00:11:40.722 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:40.722 08:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.652 08:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.910 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:41.910 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:42.169 true 00:11:42.169 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:42.169 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.428 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.687 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:42.687 08:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:42.945 true 00:11:42.945 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:42.945 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.204 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.464 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:43.464 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:43.722 true 00:11:43.722 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:43.722 08:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.654 08:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.910 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:44.910 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:45.167 true 00:11:45.167 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:45.167 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.425 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.683 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:45.683 08:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:45.940 true 00:11:45.940 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:45.940 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.199 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.457 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:46.457 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:46.714 true 00:11:46.714 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:46.714 08:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.647 08:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.903 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:47.903 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:48.160 true 00:11:48.160 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:48.160 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.419 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.679 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:48.679 08:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:48.937 true 00:11:48.937 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:48.937 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.196 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.453 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:49.453 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:49.711 true 00:11:49.711 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:49.711 08:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.645 08:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.903 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:50.903 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:51.161 true 00:11:51.161 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:51.161 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.727 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.727 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:51.727 08:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:51.985 true 00:11:51.985 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:51.985 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.243 Initializing NVMe Controllers 00:11:52.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:52.243 Controller IO queue size 128, less than required. 
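Every iteration of the loop traced above has the same shape: as long as the spdk_nvme_perf process is still alive, namespace 1 is detached, Delay0 is attached again, and NULL1 is grown by one more unit via bdev_null_resize. A condensed reconstruction of that loop (ns_hotplug_stress.sh@44-50 in the trace); variable names mirror the trace:

null_size=1000
while kill -0 "$PERF_PID"; do                                      # exits once the 30 s perf run finishes
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc_py" bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"

The 'Read completed with error (sct=0, sc=11)' messages interleaved earlier come from the perf initiator and are expected here: they presumably correspond to reads that land on namespace 1 while it is detached.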
00:11:52.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:52.243 Controller IO queue size 128, less than required. 00:11:52.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:52.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:52.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:52.243 Initialization complete. Launching workers. 00:11:52.243 ======================================================== 00:11:52.243 Latency(us) 00:11:52.243 Device Information : IOPS MiB/s Average min max 00:11:52.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 330.40 0.16 141720.70 3754.71 1054017.18 00:11:52.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7277.80 3.55 17588.83 3722.99 667363.02 00:11:52.243 ======================================================== 00:11:52.243 Total : 7608.19 3.71 22979.40 3722.99 1054017.18 00:11:52.243 00:11:52.502 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.760 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:52.760 08:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:53.018 true 00:11:53.018 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68008 00:11:53.018 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68008) - No such process 00:11:53.018 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68008 00:11:53.018 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.286 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:53.567 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:53.567 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:53.567 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:53.567 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:53.567 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:53.825 null0 00:11:53.825 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:53.825 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:53.825 08:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:54.083 null1 00:11:54.083 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.083 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.083 
08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:54.340 null2 00:11:54.340 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.340 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.340 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:54.598 null3 00:11:54.598 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.598 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.598 08:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:54.857 null4 00:11:54.857 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.857 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.857 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:55.115 null5 00:11:55.372 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.372 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.372 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:55.630 null6 00:11:55.630 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.630 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.630 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:55.889 null7 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
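With the perf phase over and namespaces 1 and 2 removed, the script moves to eight concurrent hotplug workers: it creates null0 through null7 and runs add_remove in the background for namespace IDs 1 through 8, collecting the worker PIDs. A sketch reconstructed from the traced script lines (ns_hotplug_stress.sh@14-18 and @58-66); the ten-iteration bound comes from the '(( i < 10 ))' guard in the trace:

add_remove() {                                   # one worker: repeatedly attach and detach a single namespace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096 # 100 MB null bdev, 4096-byte blocks
    add_remove "$((i + 1))" "null$i" &           # worker i hotplugs null$i as namespace i+1
    pids+=($!)
done
wait "${pids[@]}"                                # appears in the trace as 'wait 69075 69076 ... 69088'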
00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.889 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69075 69076 69078 69081 69083 69084 69086 69088 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.890 08:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.149 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.408 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.666 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.924 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.924 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.924 08:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.924 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.924 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.924 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.924 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.181 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.181 08:52:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.438 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.697 
08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.697 08:52:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.955 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
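The interleaved trace above is the namespace hotplug loop in target/ns_hotplug_stress.sh: the (( ++i )) / (( i < 10 )) checks on line 16, the nvmf_subsystem_add_ns call on line 17 and the nvmf_subsystem_remove_ns call on line 18 repeatedly attach the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 and detach them again while the workload runs. A minimal bash sketch of that loop, reconstructed only from the traced commands (the real script may structure, batch, or order the RPCs differently):

    # Hypothetical reconstruction of the traced hotplug loop; the rpc.py path and
    # arguments are taken verbatim from the log, the surrounding structure is inferred.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                   # one worker per namespace/bdev pair
        local nsid=$1 bdev=$2 i
        for (( i = 0; i < 10; ++i )); do             # sh@16 in the trace
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18
        done
    }

    for n in {0..7}; do
        add_remove "$(( n + 1 ))" "null$n" &         # nsid 1..8 backed by null0..null7
    done
    wait

Running the per-namespace workers concurrently is what produces the interleaved add/remove ordering seen in the log.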
00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.213 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.471 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.472 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.730 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.988 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.988 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.988 08:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.988 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.246 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.504 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.762 08:52:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.020 08:52:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.020 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.021 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.021 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.021 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.021 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.279 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.537 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.795 08:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.795 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.053 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.311 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.569 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.827 08:52:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.827 rmmod nvme_tcp 00:12:01.827 rmmod nvme_fabrics 00:12:01.827 rmmod nvme_keyring 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 67886 ']' 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 67886 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 67886 ']' 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 67886 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.827 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67886 00:12:02.086 killing process with pid 67886 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67886' 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 67886 00:12:02.086 [2024-05-15 08:52:18.065922] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 67886 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:02.086 ************************************ 00:12:02.086 END TEST nvmf_ns_hotplug_stress 00:12:02.086 ************************************ 00:12:02.086 00:12:02.086 real 0m43.584s 00:12:02.086 user 3m34.350s 00:12:02.086 sys 0m12.702s 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.086 08:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.345 08:52:18 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:02.345 08:52:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:02.345 08:52:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.345 08:52:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.345 ************************************ 00:12:02.345 START TEST nvmf_connect_stress 00:12:02.345 ************************************ 00:12:02.345 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:02.345 * Looking for test storage... 
00:12:02.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.345 08:52:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.345 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:02.345 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:02.346 Cannot find device "nvmf_tgt_br" 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.346 Cannot find device "nvmf_tgt_br2" 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:02.346 Cannot find device "nvmf_tgt_br" 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:02.346 Cannot find device "nvmf_tgt_br2" 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:12:02.346 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:12:02.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.605 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:02.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:02.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:12:02.606 00:12:02.606 --- 10.0.0.2 ping statistics --- 00:12:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.606 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:02.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:02.606 00:12:02.606 --- 10.0.0.3 ping statistics --- 00:12:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.606 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:02.606 00:12:02.606 --- 10.0.0.1 ping statistics --- 00:12:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.606 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70386 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70386 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 70386 ']' 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
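The ip and iptables calls traced above are nvmf_veth_init from test/nvmf/common.sh building the virtual topology this run uses (NET_TYPE=virt): the initiator keeps 10.0.0.1 on the host, the target gets 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and all legs are bridged through nvmf_br before the three ping checks. A condensed, standalone sketch of the same sequence (not the exact common.sh code):

    # Condensed sketch of the traced nvmf_veth_init sequence
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator (host side)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                   # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                          # namespace -> host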
00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.606 08:52:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.864 [2024-05-15 08:52:18.860281] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:02.864 [2024-05-15 08:52:18.860398] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.864 [2024-05-15 08:52:18.992005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.864 [2024-05-15 08:52:19.053034] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.864 [2024-05-15 08:52:19.053086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.864 [2024-05-15 08:52:19.053098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.864 [2024-05-15 08:52:19.053106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.864 [2024-05-15 08:52:19.053113] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.864 [2024-05-15 08:52:19.053219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.864 [2024-05-15 08:52:19.053990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.864 [2024-05-15 08:52:19.054035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.122 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.123 [2024-05-15 08:52:19.196328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.123 
08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.123 [2024-05-15 08:52:19.216248] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:03.123 [2024-05-15 08:52:19.216830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.123 NULL1 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=70430 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.123 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.689 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.689 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:03.689 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.689 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.689 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.947 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.947 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:03.947 08:52:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.947 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:12:03.947 08:52:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.205 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.205 08:52:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:04.205 08:52:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.205 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.205 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.463 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.463 08:52:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:04.463 08:52:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.463 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.463 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.721 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.721 08:52:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:04.721 08:52:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.721 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.721 08:52:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.286 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.286 08:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:05.286 08:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.286 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.286 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.545 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.545 08:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:05.545 08:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.545 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.545 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.803 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.803 08:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:05.803 08:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.803 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.803 08:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.061 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.061 08:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:06.061 08:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.061 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.061 08:52:22 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.318 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.318 08:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:06.318 08:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.318 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.318 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.884 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.884 08:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:06.884 08:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.884 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.884 08:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.143 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.143 08:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:07.143 08:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.143 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.143 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.402 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.402 08:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:07.402 08:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.402 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.402 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.660 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.660 08:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:07.660 08:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.660 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.660 08:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.227 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.227 08:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:08.227 08:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.227 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.227 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.485 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.485 08:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:08.485 08:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.485 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.485 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.745 08:52:24 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.745 08:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:08.745 08:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.745 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.745 08:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.002 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:09.002 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.002 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.002 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.260 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.261 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:09.261 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.261 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.261 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.827 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.827 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:09.827 08:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.827 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.827 08:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.085 08:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:10.085 08:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.085 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.085 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.344 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.344 08:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:10.344 08:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.344 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.344 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.603 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.603 08:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:10.603 08:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.603 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.603 08:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.862 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:12:10.862 08:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:10.862 08:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.862 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.862 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.430 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.430 08:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:11.430 08:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.430 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.430 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.688 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.688 08:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:11.688 08:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.688 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.688 08:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.947 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.947 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:11.947 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.947 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.947 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.206 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.206 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:12.206 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.206 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.206 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.464 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.464 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:12.464 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.464 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.464 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.031 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.031 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:13.031 08:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.031 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.031 08:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.289 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.289 08:52:29 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 70430 00:12:13.289 08:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.289 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.289 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.289 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70430 00:12:13.547 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70430) - No such process 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 70430 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.547 rmmod nvme_tcp 00:12:13.547 rmmod nvme_fabrics 00:12:13.547 rmmod nvme_keyring 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70386 ']' 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70386 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 70386 ']' 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 70386 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70386 00:12:13.547 killing process with pid 70386 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70386' 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 70386 00:12:13.547 [2024-05-15 08:52:29.722013] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:12:13.547 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 70386 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:13.806 00:12:13.806 real 0m11.585s 00:12:13.806 user 0m38.822s 00:12:13.806 sys 0m3.180s 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.806 ************************************ 00:12:13.806 END TEST nvmf_connect_stress 00:12:13.806 ************************************ 00:12:13.806 08:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 08:52:29 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:13.806 08:52:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:13.806 08:52:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.806 08:52:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 ************************************ 00:12:13.806 START TEST nvmf_fused_ordering 00:12:13.806 ************************************ 00:12:13.806 08:52:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:14.065 * Looking for test storage... 
00:12:14.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:14.065 Cannot find device "nvmf_tgt_br" 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.065 Cannot find device "nvmf_tgt_br2" 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:14.065 Cannot find device "nvmf_tgt_br" 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:14.065 Cannot find device "nvmf_tgt_br2" 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:12:14.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.065 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.323 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:14.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:14.324 00:12:14.324 --- 10.0.0.2 ping statistics --- 00:12:14.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.324 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:14.324 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.324 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:14.324 00:12:14.324 --- 10.0.0.3 ping statistics --- 00:12:14.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.324 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:14.324 00:12:14.324 --- 10.0.0.1 ping statistics --- 00:12:14.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.324 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=70754 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 70754 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 70754 ']' 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:14.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
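As in the connect_stress run above, once nvmf_tgt is listening on /var/tmp/spdk.sock the test drives it through the rpc_cmd wrapper: create the TCP transport, create subsystem cnode1, add a listener on 10.0.0.2:4420, back it with a null bdev, and attach that bdev as a namespace. A rough equivalent issued directly against the RPC socket with scripts/rpc.py (the path is an assumption based on the repo layout in this log; the harness itself goes through rpc_cmd, and the options are passed exactly as they appear in the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed location of the SPDK RPC client

  # transport options as the harness passes them ("-t tcp -o -u 8192")
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # listen on the namespaced veth address configured earlier
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # null bdev (1000 MiB, 512-byte blocks) exported as the subsystem's namespace
  $RPC bdev_null_create NULL1 1000 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The initiator-side tools (connect_stress above, fused_ordering below) then connect with "-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'", which is why the null bdev shows up as "Namespace ID: 1 size: 1GB" once the controller attaches.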
00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:14.324 08:52:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:14.324 [2024-05-15 08:52:30.504267] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:14.324 [2024-05-15 08:52:30.504388] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.582 [2024-05-15 08:52:30.642302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.582 [2024-05-15 08:52:30.705194] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.582 [2024-05-15 08:52:30.705263] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.582 [2024-05-15 08:52:30.705282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.582 [2024-05-15 08:52:30.705295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.582 [2024-05-15 08:52:30.705306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.582 [2024-05-15 08:52:30.705340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 [2024-05-15 08:52:31.528545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 [2024-05-15 
08:52:31.544472] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:15.520 [2024-05-15 08:52:31.544765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 NULL1 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.520 08:52:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:15.520 [2024-05-15 08:52:31.596081] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:12:15.520 [2024-05-15 08:52:31.596122] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70804 ] 00:12:16.088 Attached to nqn.2016-06.io.spdk:cnode1 00:12:16.088 Namespace ID: 1 size: 1GB 00:12:16.088 fused_ordering(0) 00:12:16.088 fused_ordering(1) 00:12:16.088 fused_ordering(2) 00:12:16.088 fused_ordering(3) 00:12:16.088 fused_ordering(4) 00:12:16.088 fused_ordering(5) 00:12:16.088 fused_ordering(6) 00:12:16.088 fused_ordering(7) 00:12:16.088 fused_ordering(8) 00:12:16.088 fused_ordering(9) 00:12:16.088 fused_ordering(10) 00:12:16.088 fused_ordering(11) 00:12:16.088 fused_ordering(12) 00:12:16.088 fused_ordering(13) 00:12:16.088 fused_ordering(14) 00:12:16.088 fused_ordering(15) 00:12:16.088 fused_ordering(16) 00:12:16.088 fused_ordering(17) 00:12:16.088 fused_ordering(18) 00:12:16.088 fused_ordering(19) 00:12:16.088 fused_ordering(20) 00:12:16.088 fused_ordering(21) 00:12:16.088 fused_ordering(22) 00:12:16.088 fused_ordering(23) 00:12:16.088 fused_ordering(24) 00:12:16.088 fused_ordering(25) 00:12:16.088 fused_ordering(26) 00:12:16.088 fused_ordering(27) 00:12:16.088 fused_ordering(28) 00:12:16.089 fused_ordering(29) 00:12:16.089 fused_ordering(30) 00:12:16.089 fused_ordering(31) 00:12:16.089 fused_ordering(32) 00:12:16.089 fused_ordering(33) 00:12:16.089 fused_ordering(34) 00:12:16.089 fused_ordering(35) 00:12:16.089 fused_ordering(36) 00:12:16.089 fused_ordering(37) 00:12:16.089 fused_ordering(38) 00:12:16.089 fused_ordering(39) 00:12:16.089 fused_ordering(40) 00:12:16.089 fused_ordering(41) 00:12:16.089 fused_ordering(42) 00:12:16.089 fused_ordering(43) 00:12:16.089 fused_ordering(44) 00:12:16.089 fused_ordering(45) 00:12:16.089 fused_ordering(46) 00:12:16.089 fused_ordering(47) 00:12:16.089 fused_ordering(48) 00:12:16.089 fused_ordering(49) 00:12:16.089 fused_ordering(50) 00:12:16.089 fused_ordering(51) 00:12:16.089 fused_ordering(52) 00:12:16.089 fused_ordering(53) 00:12:16.089 fused_ordering(54) 00:12:16.089 fused_ordering(55) 00:12:16.089 fused_ordering(56) 00:12:16.089 fused_ordering(57) 00:12:16.089 fused_ordering(58) 00:12:16.089 fused_ordering(59) 00:12:16.089 fused_ordering(60) 00:12:16.089 fused_ordering(61) 00:12:16.089 fused_ordering(62) 00:12:16.089 fused_ordering(63) 00:12:16.089 fused_ordering(64) 00:12:16.089 fused_ordering(65) 00:12:16.089 fused_ordering(66) 00:12:16.089 fused_ordering(67) 00:12:16.089 fused_ordering(68) 00:12:16.089 fused_ordering(69) 00:12:16.089 fused_ordering(70) 00:12:16.089 fused_ordering(71) 00:12:16.089 fused_ordering(72) 00:12:16.089 fused_ordering(73) 00:12:16.089 fused_ordering(74) 00:12:16.089 fused_ordering(75) 00:12:16.089 fused_ordering(76) 00:12:16.089 fused_ordering(77) 00:12:16.089 fused_ordering(78) 00:12:16.089 fused_ordering(79) 00:12:16.089 fused_ordering(80) 00:12:16.089 fused_ordering(81) 00:12:16.089 fused_ordering(82) 00:12:16.089 fused_ordering(83) 00:12:16.089 fused_ordering(84) 00:12:16.089 fused_ordering(85) 00:12:16.089 fused_ordering(86) 00:12:16.089 fused_ordering(87) 00:12:16.089 fused_ordering(88) 00:12:16.089 fused_ordering(89) 00:12:16.089 fused_ordering(90) 00:12:16.089 fused_ordering(91) 00:12:16.089 fused_ordering(92) 00:12:16.089 fused_ordering(93) 00:12:16.089 fused_ordering(94) 00:12:16.089 fused_ordering(95) 00:12:16.089 fused_ordering(96) 00:12:16.089 fused_ordering(97) 00:12:16.089 fused_ordering(98) 
00:12:16.089 fused_ordering(99) 00:12:16.089 fused_ordering(100) 00:12:16.089 fused_ordering(101) 00:12:16.089 fused_ordering(102) 00:12:16.089 fused_ordering(103) 00:12:16.089 fused_ordering(104) 00:12:16.089 fused_ordering(105) 00:12:16.089 fused_ordering(106) 00:12:16.089 fused_ordering(107) 00:12:16.089 fused_ordering(108) 00:12:16.089 fused_ordering(109) 00:12:16.089 fused_ordering(110) 00:12:16.089 fused_ordering(111) 00:12:16.089 fused_ordering(112) 00:12:16.089 fused_ordering(113) 00:12:16.089 fused_ordering(114) 00:12:16.089 fused_ordering(115) 00:12:16.089 fused_ordering(116) 00:12:16.089 fused_ordering(117) 00:12:16.089 fused_ordering(118) 00:12:16.089 fused_ordering(119) 00:12:16.089 fused_ordering(120) 00:12:16.089 fused_ordering(121) 00:12:16.089 fused_ordering(122) 00:12:16.089 fused_ordering(123) 00:12:16.089 fused_ordering(124) 00:12:16.089 fused_ordering(125) 00:12:16.089 fused_ordering(126) 00:12:16.089 fused_ordering(127) 00:12:16.089 fused_ordering(128) 00:12:16.089 fused_ordering(129) 00:12:16.089 fused_ordering(130) 00:12:16.089 fused_ordering(131) 00:12:16.089 fused_ordering(132) 00:12:16.089 fused_ordering(133) 00:12:16.089 fused_ordering(134) 00:12:16.089 fused_ordering(135) 00:12:16.089 fused_ordering(136) 00:12:16.089 fused_ordering(137) 00:12:16.089 fused_ordering(138) 00:12:16.089 fused_ordering(139) 00:12:16.089 fused_ordering(140) 00:12:16.089 fused_ordering(141) 00:12:16.089 fused_ordering(142) 00:12:16.089 fused_ordering(143) 00:12:16.089 fused_ordering(144) 00:12:16.089 fused_ordering(145) 00:12:16.089 fused_ordering(146) 00:12:16.089 fused_ordering(147) 00:12:16.089 fused_ordering(148) 00:12:16.089 fused_ordering(149) 00:12:16.089 fused_ordering(150) 00:12:16.089 fused_ordering(151) 00:12:16.089 fused_ordering(152) 00:12:16.089 fused_ordering(153) 00:12:16.089 fused_ordering(154) 00:12:16.089 fused_ordering(155) 00:12:16.089 fused_ordering(156) 00:12:16.089 fused_ordering(157) 00:12:16.089 fused_ordering(158) 00:12:16.089 fused_ordering(159) 00:12:16.089 fused_ordering(160) 00:12:16.089 fused_ordering(161) 00:12:16.089 fused_ordering(162) 00:12:16.089 fused_ordering(163) 00:12:16.089 fused_ordering(164) 00:12:16.089 fused_ordering(165) 00:12:16.089 fused_ordering(166) 00:12:16.089 fused_ordering(167) 00:12:16.089 fused_ordering(168) 00:12:16.089 fused_ordering(169) 00:12:16.089 fused_ordering(170) 00:12:16.089 fused_ordering(171) 00:12:16.089 fused_ordering(172) 00:12:16.089 fused_ordering(173) 00:12:16.089 fused_ordering(174) 00:12:16.089 fused_ordering(175) 00:12:16.089 fused_ordering(176) 00:12:16.089 fused_ordering(177) 00:12:16.089 fused_ordering(178) 00:12:16.089 fused_ordering(179) 00:12:16.089 fused_ordering(180) 00:12:16.089 fused_ordering(181) 00:12:16.089 fused_ordering(182) 00:12:16.089 fused_ordering(183) 00:12:16.089 fused_ordering(184) 00:12:16.089 fused_ordering(185) 00:12:16.089 fused_ordering(186) 00:12:16.089 fused_ordering(187) 00:12:16.089 fused_ordering(188) 00:12:16.089 fused_ordering(189) 00:12:16.089 fused_ordering(190) 00:12:16.089 fused_ordering(191) 00:12:16.089 fused_ordering(192) 00:12:16.089 fused_ordering(193) 00:12:16.089 fused_ordering(194) 00:12:16.089 fused_ordering(195) 00:12:16.089 fused_ordering(196) 00:12:16.089 fused_ordering(197) 00:12:16.089 fused_ordering(198) 00:12:16.089 fused_ordering(199) 00:12:16.089 fused_ordering(200) 00:12:16.089 fused_ordering(201) 00:12:16.089 fused_ordering(202) 00:12:16.089 fused_ordering(203) 00:12:16.089 fused_ordering(204) 00:12:16.089 fused_ordering(205) 00:12:16.089 
fused_ordering(206) 00:12:16.089 fused_ordering(207) 00:12:16.089 fused_ordering(208) 00:12:16.089 fused_ordering(209) 00:12:16.089 fused_ordering(210) 00:12:16.089 fused_ordering(211) 00:12:16.089 fused_ordering(212) 00:12:16.089 fused_ordering(213) 00:12:16.089 fused_ordering(214) 00:12:16.089 fused_ordering(215) 00:12:16.089 fused_ordering(216) 00:12:16.089 fused_ordering(217) 00:12:16.089 fused_ordering(218) 00:12:16.089 fused_ordering(219) 00:12:16.089 fused_ordering(220) 00:12:16.089 fused_ordering(221) 00:12:16.089 fused_ordering(222) 00:12:16.089 fused_ordering(223) 00:12:16.089 fused_ordering(224) 00:12:16.089 fused_ordering(225) 00:12:16.089 fused_ordering(226) 00:12:16.089 fused_ordering(227) 00:12:16.089 fused_ordering(228) 00:12:16.089 fused_ordering(229) 00:12:16.089 fused_ordering(230) 00:12:16.089 fused_ordering(231) 00:12:16.089 fused_ordering(232) 00:12:16.089 fused_ordering(233) 00:12:16.089 fused_ordering(234) 00:12:16.089 fused_ordering(235) 00:12:16.089 fused_ordering(236) 00:12:16.089 fused_ordering(237) 00:12:16.089 fused_ordering(238) 00:12:16.089 fused_ordering(239) 00:12:16.089 fused_ordering(240) 00:12:16.089 fused_ordering(241) 00:12:16.089 fused_ordering(242) 00:12:16.089 fused_ordering(243) 00:12:16.089 fused_ordering(244) 00:12:16.089 fused_ordering(245) 00:12:16.089 fused_ordering(246) 00:12:16.089 fused_ordering(247) 00:12:16.089 fused_ordering(248) 00:12:16.089 fused_ordering(249) 00:12:16.089 fused_ordering(250) 00:12:16.089 fused_ordering(251) 00:12:16.089 fused_ordering(252) 00:12:16.089 fused_ordering(253) 00:12:16.089 fused_ordering(254) 00:12:16.089 fused_ordering(255) 00:12:16.089 fused_ordering(256) 00:12:16.089 fused_ordering(257) 00:12:16.089 fused_ordering(258) 00:12:16.089 fused_ordering(259) 00:12:16.089 fused_ordering(260) 00:12:16.089 fused_ordering(261) 00:12:16.089 fused_ordering(262) 00:12:16.089 fused_ordering(263) 00:12:16.089 fused_ordering(264) 00:12:16.089 fused_ordering(265) 00:12:16.089 fused_ordering(266) 00:12:16.089 fused_ordering(267) 00:12:16.089 fused_ordering(268) 00:12:16.089 fused_ordering(269) 00:12:16.089 fused_ordering(270) 00:12:16.089 fused_ordering(271) 00:12:16.089 fused_ordering(272) 00:12:16.089 fused_ordering(273) 00:12:16.089 fused_ordering(274) 00:12:16.089 fused_ordering(275) 00:12:16.089 fused_ordering(276) 00:12:16.089 fused_ordering(277) 00:12:16.089 fused_ordering(278) 00:12:16.089 fused_ordering(279) 00:12:16.089 fused_ordering(280) 00:12:16.089 fused_ordering(281) 00:12:16.089 fused_ordering(282) 00:12:16.089 fused_ordering(283) 00:12:16.089 fused_ordering(284) 00:12:16.089 fused_ordering(285) 00:12:16.089 fused_ordering(286) 00:12:16.089 fused_ordering(287) 00:12:16.089 fused_ordering(288) 00:12:16.089 fused_ordering(289) 00:12:16.089 fused_ordering(290) 00:12:16.089 fused_ordering(291) 00:12:16.089 fused_ordering(292) 00:12:16.089 fused_ordering(293) 00:12:16.089 fused_ordering(294) 00:12:16.089 fused_ordering(295) 00:12:16.089 fused_ordering(296) 00:12:16.089 fused_ordering(297) 00:12:16.089 fused_ordering(298) 00:12:16.089 fused_ordering(299) 00:12:16.089 fused_ordering(300) 00:12:16.089 fused_ordering(301) 00:12:16.089 fused_ordering(302) 00:12:16.089 fused_ordering(303) 00:12:16.089 fused_ordering(304) 00:12:16.089 fused_ordering(305) 00:12:16.089 fused_ordering(306) 00:12:16.089 fused_ordering(307) 00:12:16.089 fused_ordering(308) 00:12:16.089 fused_ordering(309) 00:12:16.089 fused_ordering(310) 00:12:16.089 fused_ordering(311) 00:12:16.089 fused_ordering(312) 00:12:16.089 fused_ordering(313) 
00:12:16.089 fused_ordering(314) 00:12:16.089 fused_ordering(315) 00:12:16.089 fused_ordering(316) 00:12:16.089 fused_ordering(317) 00:12:16.090 fused_ordering(318) 00:12:16.090 fused_ordering(319) 00:12:16.090 fused_ordering(320) 00:12:16.090 fused_ordering(321) 00:12:16.090 fused_ordering(322) 00:12:16.090 fused_ordering(323) 00:12:16.090 fused_ordering(324) 00:12:16.090 fused_ordering(325) 00:12:16.090 fused_ordering(326) 00:12:16.090 fused_ordering(327) 00:12:16.090 fused_ordering(328) 00:12:16.090 fused_ordering(329) 00:12:16.090 fused_ordering(330) 00:12:16.090 fused_ordering(331) 00:12:16.090 fused_ordering(332) 00:12:16.090 fused_ordering(333) 00:12:16.090 fused_ordering(334) 00:12:16.090 fused_ordering(335) 00:12:16.090 fused_ordering(336) 00:12:16.090 fused_ordering(337) 00:12:16.090 fused_ordering(338) 00:12:16.090 fused_ordering(339) 00:12:16.090 fused_ordering(340) 00:12:16.090 fused_ordering(341) 00:12:16.090 fused_ordering(342) 00:12:16.090 fused_ordering(343) 00:12:16.090 fused_ordering(344) 00:12:16.090 fused_ordering(345) 00:12:16.090 fused_ordering(346) 00:12:16.090 fused_ordering(347) 00:12:16.090 fused_ordering(348) 00:12:16.090 fused_ordering(349) 00:12:16.090 fused_ordering(350) 00:12:16.090 fused_ordering(351) 00:12:16.090 fused_ordering(352) 00:12:16.090 fused_ordering(353) 00:12:16.090 fused_ordering(354) 00:12:16.090 fused_ordering(355) 00:12:16.090 fused_ordering(356) 00:12:16.090 fused_ordering(357) 00:12:16.090 fused_ordering(358) 00:12:16.090 fused_ordering(359) 00:12:16.090 fused_ordering(360) 00:12:16.090 fused_ordering(361) 00:12:16.090 fused_ordering(362) 00:12:16.090 fused_ordering(363) 00:12:16.090 fused_ordering(364) 00:12:16.090 fused_ordering(365) 00:12:16.090 fused_ordering(366) 00:12:16.090 fused_ordering(367) 00:12:16.090 fused_ordering(368) 00:12:16.090 fused_ordering(369) 00:12:16.090 fused_ordering(370) 00:12:16.090 fused_ordering(371) 00:12:16.090 fused_ordering(372) 00:12:16.090 fused_ordering(373) 00:12:16.090 fused_ordering(374) 00:12:16.090 fused_ordering(375) 00:12:16.090 fused_ordering(376) 00:12:16.090 fused_ordering(377) 00:12:16.090 fused_ordering(378) 00:12:16.090 fused_ordering(379) 00:12:16.090 fused_ordering(380) 00:12:16.090 fused_ordering(381) 00:12:16.090 fused_ordering(382) 00:12:16.090 fused_ordering(383) 00:12:16.090 fused_ordering(384) 00:12:16.090 fused_ordering(385) 00:12:16.090 fused_ordering(386) 00:12:16.090 fused_ordering(387) 00:12:16.090 fused_ordering(388) 00:12:16.090 fused_ordering(389) 00:12:16.090 fused_ordering(390) 00:12:16.090 fused_ordering(391) 00:12:16.090 fused_ordering(392) 00:12:16.090 fused_ordering(393) 00:12:16.090 fused_ordering(394) 00:12:16.090 fused_ordering(395) 00:12:16.090 fused_ordering(396) 00:12:16.090 fused_ordering(397) 00:12:16.090 fused_ordering(398) 00:12:16.090 fused_ordering(399) 00:12:16.090 fused_ordering(400) 00:12:16.090 fused_ordering(401) 00:12:16.090 fused_ordering(402) 00:12:16.090 fused_ordering(403) 00:12:16.090 fused_ordering(404) 00:12:16.090 fused_ordering(405) 00:12:16.090 fused_ordering(406) 00:12:16.090 fused_ordering(407) 00:12:16.090 fused_ordering(408) 00:12:16.090 fused_ordering(409) 00:12:16.090 fused_ordering(410) 00:12:16.657 fused_ordering(411) 00:12:16.657 fused_ordering(412) 00:12:16.657 fused_ordering(413) 00:12:16.657 fused_ordering(414) 00:12:16.657 fused_ordering(415) 00:12:16.657 fused_ordering(416) 00:12:16.657 fused_ordering(417) 00:12:16.657 fused_ordering(418) 00:12:16.657 fused_ordering(419) 00:12:16.657 fused_ordering(420) 00:12:16.657 
fused_ordering(421) 00:12:16.657 fused_ordering(422) 00:12:16.657 fused_ordering(423) 00:12:16.657 fused_ordering(424) 00:12:16.657 fused_ordering(425) 00:12:16.657 fused_ordering(426) 00:12:16.657 fused_ordering(427) 00:12:16.657 fused_ordering(428) 00:12:16.657 fused_ordering(429) 00:12:16.657 fused_ordering(430) 00:12:16.657 fused_ordering(431) 00:12:16.657 fused_ordering(432) 00:12:16.657 fused_ordering(433) 00:12:16.657 fused_ordering(434) 00:12:16.657 fused_ordering(435) 00:12:16.657 fused_ordering(436) 00:12:16.657 fused_ordering(437) 00:12:16.657 fused_ordering(438) 00:12:16.657 fused_ordering(439) 00:12:16.657 fused_ordering(440) 00:12:16.657 fused_ordering(441) 00:12:16.657 fused_ordering(442) 00:12:16.657 fused_ordering(443) 00:12:16.657 fused_ordering(444) 00:12:16.657 fused_ordering(445) 00:12:16.657 fused_ordering(446) 00:12:16.657 fused_ordering(447) 00:12:16.657 fused_ordering(448) 00:12:16.657 fused_ordering(449) 00:12:16.657 fused_ordering(450) 00:12:16.657 fused_ordering(451) 00:12:16.657 fused_ordering(452) 00:12:16.657 fused_ordering(453) 00:12:16.657 fused_ordering(454) 00:12:16.657 fused_ordering(455) 00:12:16.657 fused_ordering(456) 00:12:16.657 fused_ordering(457) 00:12:16.657 fused_ordering(458) 00:12:16.657 fused_ordering(459) 00:12:16.657 fused_ordering(460) 00:12:16.657 fused_ordering(461) 00:12:16.657 fused_ordering(462) 00:12:16.657 fused_ordering(463) 00:12:16.657 fused_ordering(464) 00:12:16.657 fused_ordering(465) 00:12:16.657 fused_ordering(466) 00:12:16.657 fused_ordering(467) 00:12:16.657 fused_ordering(468) 00:12:16.657 fused_ordering(469) 00:12:16.657 fused_ordering(470) 00:12:16.657 fused_ordering(471) 00:12:16.658 fused_ordering(472) 00:12:16.658 fused_ordering(473) 00:12:16.658 fused_ordering(474) 00:12:16.658 fused_ordering(475) 00:12:16.658 fused_ordering(476) 00:12:16.658 fused_ordering(477) 00:12:16.658 fused_ordering(478) 00:12:16.658 fused_ordering(479) 00:12:16.658 fused_ordering(480) 00:12:16.658 fused_ordering(481) 00:12:16.658 fused_ordering(482) 00:12:16.658 fused_ordering(483) 00:12:16.658 fused_ordering(484) 00:12:16.658 fused_ordering(485) 00:12:16.658 fused_ordering(486) 00:12:16.658 fused_ordering(487) 00:12:16.658 fused_ordering(488) 00:12:16.658 fused_ordering(489) 00:12:16.658 fused_ordering(490) 00:12:16.658 fused_ordering(491) 00:12:16.658 fused_ordering(492) 00:12:16.658 fused_ordering(493) 00:12:16.658 fused_ordering(494) 00:12:16.658 fused_ordering(495) 00:12:16.658 fused_ordering(496) 00:12:16.658 fused_ordering(497) 00:12:16.658 fused_ordering(498) 00:12:16.658 fused_ordering(499) 00:12:16.658 fused_ordering(500) 00:12:16.658 fused_ordering(501) 00:12:16.658 fused_ordering(502) 00:12:16.658 fused_ordering(503) 00:12:16.658 fused_ordering(504) 00:12:16.658 fused_ordering(505) 00:12:16.658 fused_ordering(506) 00:12:16.658 fused_ordering(507) 00:12:16.658 fused_ordering(508) 00:12:16.658 fused_ordering(509) 00:12:16.658 fused_ordering(510) 00:12:16.658 fused_ordering(511) 00:12:16.658 fused_ordering(512) 00:12:16.658 fused_ordering(513) 00:12:16.658 fused_ordering(514) 00:12:16.658 fused_ordering(515) 00:12:16.658 fused_ordering(516) 00:12:16.658 fused_ordering(517) 00:12:16.658 fused_ordering(518) 00:12:16.658 fused_ordering(519) 00:12:16.658 fused_ordering(520) 00:12:16.658 fused_ordering(521) 00:12:16.658 fused_ordering(522) 00:12:16.658 fused_ordering(523) 00:12:16.658 fused_ordering(524) 00:12:16.658 fused_ordering(525) 00:12:16.658 fused_ordering(526) 00:12:16.658 fused_ordering(527) 00:12:16.658 fused_ordering(528) 
00:12:16.658 fused_ordering(529) 00:12:16.658 fused_ordering(530) 00:12:16.658 fused_ordering(531) 00:12:16.658 fused_ordering(532) 00:12:16.658 fused_ordering(533) 00:12:16.658 fused_ordering(534) 00:12:16.658 fused_ordering(535) 00:12:16.658 fused_ordering(536) 00:12:16.658 fused_ordering(537) 00:12:16.658 fused_ordering(538) 00:12:16.658 fused_ordering(539) 00:12:16.658 fused_ordering(540) 00:12:16.658 fused_ordering(541) 00:12:16.658 fused_ordering(542) 00:12:16.658 fused_ordering(543) 00:12:16.658 fused_ordering(544) 00:12:16.658 fused_ordering(545) 00:12:16.658 fused_ordering(546) 00:12:16.658 fused_ordering(547) 00:12:16.658 fused_ordering(548) 00:12:16.658 fused_ordering(549) 00:12:16.658 fused_ordering(550) 00:12:16.658 fused_ordering(551) 00:12:16.658 fused_ordering(552) 00:12:16.658 fused_ordering(553) 00:12:16.658 fused_ordering(554) 00:12:16.658 fused_ordering(555) 00:12:16.658 fused_ordering(556) 00:12:16.658 fused_ordering(557) 00:12:16.658 fused_ordering(558) 00:12:16.658 fused_ordering(559) 00:12:16.658 fused_ordering(560) 00:12:16.658 fused_ordering(561) 00:12:16.658 fused_ordering(562) 00:12:16.658 fused_ordering(563) 00:12:16.658 fused_ordering(564) 00:12:16.658 fused_ordering(565) 00:12:16.658 fused_ordering(566) 00:12:16.658 fused_ordering(567) 00:12:16.658 fused_ordering(568) 00:12:16.658 fused_ordering(569) 00:12:16.658 fused_ordering(570) 00:12:16.658 fused_ordering(571) 00:12:16.658 fused_ordering(572) 00:12:16.658 fused_ordering(573) 00:12:16.658 fused_ordering(574) 00:12:16.658 fused_ordering(575) 00:12:16.658 fused_ordering(576) 00:12:16.658 fused_ordering(577) 00:12:16.658 fused_ordering(578) 00:12:16.658 fused_ordering(579) 00:12:16.658 fused_ordering(580) 00:12:16.658 fused_ordering(581) 00:12:16.658 fused_ordering(582) 00:12:16.658 fused_ordering(583) 00:12:16.658 fused_ordering(584) 00:12:16.658 fused_ordering(585) 00:12:16.658 fused_ordering(586) 00:12:16.658 fused_ordering(587) 00:12:16.658 fused_ordering(588) 00:12:16.658 fused_ordering(589) 00:12:16.658 fused_ordering(590) 00:12:16.658 fused_ordering(591) 00:12:16.658 fused_ordering(592) 00:12:16.658 fused_ordering(593) 00:12:16.658 fused_ordering(594) 00:12:16.658 fused_ordering(595) 00:12:16.658 fused_ordering(596) 00:12:16.658 fused_ordering(597) 00:12:16.658 fused_ordering(598) 00:12:16.658 fused_ordering(599) 00:12:16.658 fused_ordering(600) 00:12:16.658 fused_ordering(601) 00:12:16.658 fused_ordering(602) 00:12:16.658 fused_ordering(603) 00:12:16.658 fused_ordering(604) 00:12:16.658 fused_ordering(605) 00:12:16.658 fused_ordering(606) 00:12:16.658 fused_ordering(607) 00:12:16.658 fused_ordering(608) 00:12:16.658 fused_ordering(609) 00:12:16.658 fused_ordering(610) 00:12:16.658 fused_ordering(611) 00:12:16.658 fused_ordering(612) 00:12:16.658 fused_ordering(613) 00:12:16.658 fused_ordering(614) 00:12:16.658 fused_ordering(615) 00:12:17.225 fused_ordering(616) 00:12:17.225 fused_ordering(617) 00:12:17.225 fused_ordering(618) 00:12:17.225 fused_ordering(619) 00:12:17.225 fused_ordering(620) 00:12:17.225 fused_ordering(621) 00:12:17.225 fused_ordering(622) 00:12:17.225 fused_ordering(623) 00:12:17.225 fused_ordering(624) 00:12:17.225 fused_ordering(625) 00:12:17.225 fused_ordering(626) 00:12:17.225 fused_ordering(627) 00:12:17.225 fused_ordering(628) 00:12:17.225 fused_ordering(629) 00:12:17.225 fused_ordering(630) 00:12:17.225 fused_ordering(631) 00:12:17.225 fused_ordering(632) 00:12:17.225 fused_ordering(633) 00:12:17.225 fused_ordering(634) 00:12:17.225 fused_ordering(635) 00:12:17.225 
fused_ordering(636) 00:12:17.225 fused_ordering(637) 00:12:17.225 fused_ordering(638) 00:12:17.225 fused_ordering(639) 00:12:17.225 fused_ordering(640) 00:12:17.225 fused_ordering(641) 00:12:17.225 fused_ordering(642) 00:12:17.225 fused_ordering(643) 00:12:17.225 fused_ordering(644) 00:12:17.225 fused_ordering(645) 00:12:17.225 fused_ordering(646) 00:12:17.225 fused_ordering(647) 00:12:17.225 fused_ordering(648) 00:12:17.225 fused_ordering(649) 00:12:17.225 fused_ordering(650) 00:12:17.225 fused_ordering(651) 00:12:17.225 fused_ordering(652) 00:12:17.225 fused_ordering(653) 00:12:17.225 fused_ordering(654) 00:12:17.225 fused_ordering(655) 00:12:17.225 fused_ordering(656) 00:12:17.225 fused_ordering(657) 00:12:17.225 fused_ordering(658) 00:12:17.225 fused_ordering(659) 00:12:17.225 fused_ordering(660) 00:12:17.225 fused_ordering(661) 00:12:17.225 fused_ordering(662) 00:12:17.225 fused_ordering(663) 00:12:17.225 fused_ordering(664) 00:12:17.225 fused_ordering(665) 00:12:17.225 fused_ordering(666) 00:12:17.225 fused_ordering(667) 00:12:17.225 fused_ordering(668) 00:12:17.225 fused_ordering(669) 00:12:17.225 fused_ordering(670) 00:12:17.225 fused_ordering(671) 00:12:17.225 fused_ordering(672) 00:12:17.225 fused_ordering(673) 00:12:17.225 fused_ordering(674) 00:12:17.225 fused_ordering(675) 00:12:17.225 fused_ordering(676) 00:12:17.225 fused_ordering(677) 00:12:17.225 fused_ordering(678) 00:12:17.225 fused_ordering(679) 00:12:17.225 fused_ordering(680) 00:12:17.225 fused_ordering(681) 00:12:17.225 fused_ordering(682) 00:12:17.225 fused_ordering(683) 00:12:17.225 fused_ordering(684) 00:12:17.225 fused_ordering(685) 00:12:17.225 fused_ordering(686) 00:12:17.225 fused_ordering(687) 00:12:17.225 fused_ordering(688) 00:12:17.225 fused_ordering(689) 00:12:17.225 fused_ordering(690) 00:12:17.225 fused_ordering(691) 00:12:17.225 fused_ordering(692) 00:12:17.225 fused_ordering(693) 00:12:17.225 fused_ordering(694) 00:12:17.225 fused_ordering(695) 00:12:17.225 fused_ordering(696) 00:12:17.225 fused_ordering(697) 00:12:17.225 fused_ordering(698) 00:12:17.225 fused_ordering(699) 00:12:17.225 fused_ordering(700) 00:12:17.225 fused_ordering(701) 00:12:17.225 fused_ordering(702) 00:12:17.225 fused_ordering(703) 00:12:17.225 fused_ordering(704) 00:12:17.225 fused_ordering(705) 00:12:17.225 fused_ordering(706) 00:12:17.225 fused_ordering(707) 00:12:17.225 fused_ordering(708) 00:12:17.225 fused_ordering(709) 00:12:17.225 fused_ordering(710) 00:12:17.225 fused_ordering(711) 00:12:17.225 fused_ordering(712) 00:12:17.225 fused_ordering(713) 00:12:17.225 fused_ordering(714) 00:12:17.225 fused_ordering(715) 00:12:17.225 fused_ordering(716) 00:12:17.225 fused_ordering(717) 00:12:17.225 fused_ordering(718) 00:12:17.225 fused_ordering(719) 00:12:17.225 fused_ordering(720) 00:12:17.225 fused_ordering(721) 00:12:17.225 fused_ordering(722) 00:12:17.225 fused_ordering(723) 00:12:17.226 fused_ordering(724) 00:12:17.226 fused_ordering(725) 00:12:17.226 fused_ordering(726) 00:12:17.226 fused_ordering(727) 00:12:17.226 fused_ordering(728) 00:12:17.226 fused_ordering(729) 00:12:17.226 fused_ordering(730) 00:12:17.226 fused_ordering(731) 00:12:17.226 fused_ordering(732) 00:12:17.226 fused_ordering(733) 00:12:17.226 fused_ordering(734) 00:12:17.226 fused_ordering(735) 00:12:17.226 fused_ordering(736) 00:12:17.226 fused_ordering(737) 00:12:17.226 fused_ordering(738) 00:12:17.226 fused_ordering(739) 00:12:17.226 fused_ordering(740) 00:12:17.226 fused_ordering(741) 00:12:17.226 fused_ordering(742) 00:12:17.226 fused_ordering(743) 
00:12:17.226 fused_ordering(744) 00:12:17.226 fused_ordering(745) 00:12:17.226 fused_ordering(746) 00:12:17.226 fused_ordering(747) 00:12:17.226 fused_ordering(748) 00:12:17.226 fused_ordering(749) 00:12:17.226 fused_ordering(750) 00:12:17.226 fused_ordering(751) 00:12:17.226 fused_ordering(752) 00:12:17.226 fused_ordering(753) 00:12:17.226 fused_ordering(754) 00:12:17.226 fused_ordering(755) 00:12:17.226 fused_ordering(756) 00:12:17.226 fused_ordering(757) 00:12:17.226 fused_ordering(758) 00:12:17.226 fused_ordering(759) 00:12:17.226 fused_ordering(760) 00:12:17.226 fused_ordering(761) 00:12:17.226 fused_ordering(762) 00:12:17.226 fused_ordering(763) 00:12:17.226 fused_ordering(764) 00:12:17.226 fused_ordering(765) 00:12:17.226 fused_ordering(766) 00:12:17.226 fused_ordering(767) 00:12:17.226 fused_ordering(768) 00:12:17.226 fused_ordering(769) 00:12:17.226 fused_ordering(770) 00:12:17.226 fused_ordering(771) 00:12:17.226 fused_ordering(772) 00:12:17.226 fused_ordering(773) 00:12:17.226 fused_ordering(774) 00:12:17.226 fused_ordering(775) 00:12:17.226 fused_ordering(776) 00:12:17.226 fused_ordering(777) 00:12:17.226 fused_ordering(778) 00:12:17.226 fused_ordering(779) 00:12:17.226 fused_ordering(780) 00:12:17.226 fused_ordering(781) 00:12:17.226 fused_ordering(782) 00:12:17.226 fused_ordering(783) 00:12:17.226 fused_ordering(784) 00:12:17.226 fused_ordering(785) 00:12:17.226 fused_ordering(786) 00:12:17.226 fused_ordering(787) 00:12:17.226 fused_ordering(788) 00:12:17.226 fused_ordering(789) 00:12:17.226 fused_ordering(790) 00:12:17.226 fused_ordering(791) 00:12:17.226 fused_ordering(792) 00:12:17.226 fused_ordering(793) 00:12:17.226 fused_ordering(794) 00:12:17.226 fused_ordering(795) 00:12:17.226 fused_ordering(796) 00:12:17.226 fused_ordering(797) 00:12:17.226 fused_ordering(798) 00:12:17.226 fused_ordering(799) 00:12:17.226 fused_ordering(800) 00:12:17.226 fused_ordering(801) 00:12:17.226 fused_ordering(802) 00:12:17.226 fused_ordering(803) 00:12:17.226 fused_ordering(804) 00:12:17.226 fused_ordering(805) 00:12:17.226 fused_ordering(806) 00:12:17.226 fused_ordering(807) 00:12:17.226 fused_ordering(808) 00:12:17.226 fused_ordering(809) 00:12:17.226 fused_ordering(810) 00:12:17.226 fused_ordering(811) 00:12:17.226 fused_ordering(812) 00:12:17.226 fused_ordering(813) 00:12:17.226 fused_ordering(814) 00:12:17.226 fused_ordering(815) 00:12:17.226 fused_ordering(816) 00:12:17.226 fused_ordering(817) 00:12:17.226 fused_ordering(818) 00:12:17.226 fused_ordering(819) 00:12:17.226 fused_ordering(820) 00:12:17.819 fused_ordering(821) 00:12:17.819 fused_ordering(822) 00:12:17.819 fused_ordering(823) 00:12:17.819 fused_ordering(824) 00:12:17.819 fused_ordering(825) 00:12:17.819 fused_ordering(826) 00:12:17.819 fused_ordering(827) 00:12:17.819 fused_ordering(828) 00:12:17.819 fused_ordering(829) 00:12:17.819 fused_ordering(830) 00:12:17.819 fused_ordering(831) 00:12:17.819 fused_ordering(832) 00:12:17.819 fused_ordering(833) 00:12:17.819 fused_ordering(834) 00:12:17.819 fused_ordering(835) 00:12:17.819 fused_ordering(836) 00:12:17.819 fused_ordering(837) 00:12:17.819 fused_ordering(838) 00:12:17.819 fused_ordering(839) 00:12:17.819 fused_ordering(840) 00:12:17.819 fused_ordering(841) 00:12:17.819 fused_ordering(842) 00:12:17.819 fused_ordering(843) 00:12:17.819 fused_ordering(844) 00:12:17.819 fused_ordering(845) 00:12:17.819 fused_ordering(846) 00:12:17.819 fused_ordering(847) 00:12:17.819 fused_ordering(848) 00:12:17.819 fused_ordering(849) 00:12:17.819 fused_ordering(850) 00:12:17.819 
fused_ordering(851) 00:12:17.819 fused_ordering(852) 00:12:17.819 fused_ordering(853) 00:12:17.819 fused_ordering(854) 00:12:17.819 fused_ordering(855) 00:12:17.819 fused_ordering(856) 00:12:17.819 fused_ordering(857) 00:12:17.819 fused_ordering(858) 00:12:17.819 fused_ordering(859) 00:12:17.819 fused_ordering(860) 00:12:17.819 fused_ordering(861) 00:12:17.819 fused_ordering(862) 00:12:17.819 fused_ordering(863) 00:12:17.819 fused_ordering(864) 00:12:17.819 fused_ordering(865) 00:12:17.819 fused_ordering(866) 00:12:17.819 fused_ordering(867) 00:12:17.819 fused_ordering(868) 00:12:17.819 fused_ordering(869) 00:12:17.819 fused_ordering(870) 00:12:17.819 fused_ordering(871) 00:12:17.819 fused_ordering(872) 00:12:17.819 fused_ordering(873) 00:12:17.819 fused_ordering(874) 00:12:17.819 fused_ordering(875) 00:12:17.819 fused_ordering(876) 00:12:17.819 fused_ordering(877) 00:12:17.819 fused_ordering(878) 00:12:17.819 fused_ordering(879) 00:12:17.819 fused_ordering(880) 00:12:17.819 fused_ordering(881) 00:12:17.819 fused_ordering(882) 00:12:17.819 fused_ordering(883) 00:12:17.819 fused_ordering(884) 00:12:17.819 fused_ordering(885) 00:12:17.819 fused_ordering(886) 00:12:17.819 fused_ordering(887) 00:12:17.819 fused_ordering(888) 00:12:17.819 fused_ordering(889) 00:12:17.819 fused_ordering(890) 00:12:17.819 fused_ordering(891) 00:12:17.819 fused_ordering(892) 00:12:17.819 fused_ordering(893) 00:12:17.819 fused_ordering(894) 00:12:17.819 fused_ordering(895) 00:12:17.819 fused_ordering(896) 00:12:17.819 fused_ordering(897) 00:12:17.819 fused_ordering(898) 00:12:17.819 fused_ordering(899) 00:12:17.819 fused_ordering(900) 00:12:17.819 fused_ordering(901) 00:12:17.819 fused_ordering(902) 00:12:17.819 fused_ordering(903) 00:12:17.819 fused_ordering(904) 00:12:17.819 fused_ordering(905) 00:12:17.819 fused_ordering(906) 00:12:17.819 fused_ordering(907) 00:12:17.819 fused_ordering(908) 00:12:17.819 fused_ordering(909) 00:12:17.819 fused_ordering(910) 00:12:17.819 fused_ordering(911) 00:12:17.819 fused_ordering(912) 00:12:17.819 fused_ordering(913) 00:12:17.819 fused_ordering(914) 00:12:17.819 fused_ordering(915) 00:12:17.819 fused_ordering(916) 00:12:17.819 fused_ordering(917) 00:12:17.819 fused_ordering(918) 00:12:17.819 fused_ordering(919) 00:12:17.819 fused_ordering(920) 00:12:17.819 fused_ordering(921) 00:12:17.819 fused_ordering(922) 00:12:17.819 fused_ordering(923) 00:12:17.819 fused_ordering(924) 00:12:17.819 fused_ordering(925) 00:12:17.819 fused_ordering(926) 00:12:17.819 fused_ordering(927) 00:12:17.819 fused_ordering(928) 00:12:17.819 fused_ordering(929) 00:12:17.819 fused_ordering(930) 00:12:17.819 fused_ordering(931) 00:12:17.819 fused_ordering(932) 00:12:17.819 fused_ordering(933) 00:12:17.819 fused_ordering(934) 00:12:17.819 fused_ordering(935) 00:12:17.819 fused_ordering(936) 00:12:17.819 fused_ordering(937) 00:12:17.819 fused_ordering(938) 00:12:17.819 fused_ordering(939) 00:12:17.819 fused_ordering(940) 00:12:17.819 fused_ordering(941) 00:12:17.819 fused_ordering(942) 00:12:17.819 fused_ordering(943) 00:12:17.819 fused_ordering(944) 00:12:17.819 fused_ordering(945) 00:12:17.819 fused_ordering(946) 00:12:17.819 fused_ordering(947) 00:12:17.819 fused_ordering(948) 00:12:17.819 fused_ordering(949) 00:12:17.819 fused_ordering(950) 00:12:17.819 fused_ordering(951) 00:12:17.819 fused_ordering(952) 00:12:17.819 fused_ordering(953) 00:12:17.819 fused_ordering(954) 00:12:17.819 fused_ordering(955) 00:12:17.819 fused_ordering(956) 00:12:17.819 fused_ordering(957) 00:12:17.819 fused_ordering(958) 
00:12:17.819 fused_ordering(959) 00:12:17.819 fused_ordering(960) 00:12:17.819 fused_ordering(961) 00:12:17.819 fused_ordering(962) 00:12:17.819 fused_ordering(963) 00:12:17.819 fused_ordering(964) 00:12:17.819 fused_ordering(965) 00:12:17.819 fused_ordering(966) 00:12:17.819 fused_ordering(967) 00:12:17.819 fused_ordering(968) 00:12:17.819 fused_ordering(969) 00:12:17.819 fused_ordering(970) 00:12:17.819 fused_ordering(971) 00:12:17.819 fused_ordering(972) 00:12:17.819 fused_ordering(973) 00:12:17.819 fused_ordering(974) 00:12:17.819 fused_ordering(975) 00:12:17.819 fused_ordering(976) 00:12:17.819 fused_ordering(977) 00:12:17.819 fused_ordering(978) 00:12:17.819 fused_ordering(979) 00:12:17.819 fused_ordering(980) 00:12:17.819 fused_ordering(981) 00:12:17.819 fused_ordering(982) 00:12:17.819 fused_ordering(983) 00:12:17.819 fused_ordering(984) 00:12:17.819 fused_ordering(985) 00:12:17.819 fused_ordering(986) 00:12:17.819 fused_ordering(987) 00:12:17.819 fused_ordering(988) 00:12:17.819 fused_ordering(989) 00:12:17.819 fused_ordering(990) 00:12:17.819 fused_ordering(991) 00:12:17.819 fused_ordering(992) 00:12:17.819 fused_ordering(993) 00:12:17.819 fused_ordering(994) 00:12:17.819 fused_ordering(995) 00:12:17.819 fused_ordering(996) 00:12:17.819 fused_ordering(997) 00:12:17.819 fused_ordering(998) 00:12:17.819 fused_ordering(999) 00:12:17.819 fused_ordering(1000) 00:12:17.819 fused_ordering(1001) 00:12:17.819 fused_ordering(1002) 00:12:17.819 fused_ordering(1003) 00:12:17.819 fused_ordering(1004) 00:12:17.819 fused_ordering(1005) 00:12:17.819 fused_ordering(1006) 00:12:17.819 fused_ordering(1007) 00:12:17.819 fused_ordering(1008) 00:12:17.819 fused_ordering(1009) 00:12:17.819 fused_ordering(1010) 00:12:17.819 fused_ordering(1011) 00:12:17.819 fused_ordering(1012) 00:12:17.819 fused_ordering(1013) 00:12:17.819 fused_ordering(1014) 00:12:17.819 fused_ordering(1015) 00:12:17.819 fused_ordering(1016) 00:12:17.819 fused_ordering(1017) 00:12:17.819 fused_ordering(1018) 00:12:17.819 fused_ordering(1019) 00:12:17.819 fused_ordering(1020) 00:12:17.819 fused_ordering(1021) 00:12:17.819 fused_ordering(1022) 00:12:17.819 fused_ordering(1023) 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.819 rmmod nvme_tcp 00:12:17.819 rmmod nvme_fabrics 00:12:17.819 rmmod nvme_keyring 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 70754 ']' 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 70754 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@946 -- # '[' -z 70754 ']' 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 70754 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70754 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:17.819 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:17.820 killing process with pid 70754 00:12:17.820 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70754' 00:12:17.820 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 70754 00:12:17.820 [2024-05-15 08:52:33.898937] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:17.820 08:52:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 70754 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:18.090 00:12:18.090 real 0m4.137s 00:12:18.090 user 0m5.001s 00:12:18.090 sys 0m1.370s 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.090 08:52:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.090 ************************************ 00:12:18.090 END TEST nvmf_fused_ordering 00:12:18.090 ************************************ 00:12:18.090 08:52:34 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:18.090 08:52:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:18.090 08:52:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.090 08:52:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.090 ************************************ 00:12:18.090 START TEST nvmf_delete_subsystem 00:12:18.090 ************************************ 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:18.090 * Looking for test storage... 
00:12:18.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.090 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:18.091 Cannot find device "nvmf_tgt_br" 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.091 Cannot find device "nvmf_tgt_br2" 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:18.091 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:18.350 Cannot find device "nvmf_tgt_br" 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:18.350 Cannot find device "nvmf_tgt_br2" 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.350 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.610 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:18.610 08:52:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:18.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:12:18.610 00:12:18.610 --- 10.0.0.2 ping statistics --- 00:12:18.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.610 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:18.610 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:18.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:18.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:12:18.610 00:12:18.610 --- 10.0.0.3 ping statistics --- 00:12:18.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.610 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:18.610 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:18.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:18.610 00:12:18.610 --- 10.0.0.1 ping statistics --- 00:12:18.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.610 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71017 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71017 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 71017 ']' 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
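The three successful pings above verify the veth/bridge topology that nvmf_veth_init assembled in the preceding lines. A minimal sketch of the same layout, using the interface names and addresses from the log but covering only the first target interface and skipping the teardown/flush steps:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two root-namespace peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target, as in the output above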
00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.611 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.611 [2024-05-15 08:52:34.693690] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:18.611 [2024-05-15 08:52:34.693810] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.611 [2024-05-15 08:52:34.832786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:18.870 [2024-05-15 08:52:34.893910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.870 [2024-05-15 08:52:34.893976] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.870 [2024-05-15 08:52:34.893988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.870 [2024-05-15 08:52:34.893996] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.870 [2024-05-15 08:52:34.894003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.870 [2024-05-15 08:52:34.894075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.870 [2024-05-15 08:52:34.894086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.870 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.870 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:12:18.870 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.870 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.870 08:52:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 [2024-05-15 08:52:35.027466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 [2024-05-15 08:52:35.047394] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:18.870 [2024-05-15 08:52:35.047661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 NULL1 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 Delay0 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71053 00:12:18.870 08:52:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:19.128 [2024-05-15 08:52:35.248420] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
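For reference, the Delay0 setup and the perf workload that the subsystem delete runs against can be reproduced with plain rpc.py calls. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket instead of the namespaced rpc_cmd wrapper the harness uses, with the transport, subsystem, listener, bdev, and perf parameters taken from the log:

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  # wrap the null bdev in a delay bdev (1,000,000 us latencies) so I/O stays in flight when the subsystem goes away
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # start a 5 s, queue-depth-128 randrw workload, then delete the subsystem underneath it after 2 s
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1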
00:12:21.029 08:52:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.029 08:52:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.029 08:52:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 starting I/O failed: -6 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 [2024-05-15 08:52:37.288145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5040 is same with the state(5) to be set 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read 
completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Write completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.288 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Read completed with error (sct=0, 
sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 starting I/O failed: -6 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 [2024-05-15 08:52:37.290367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2808000c00 is same with the state(5) to be set 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read 
completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Read completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:21.289 Write completed with error (sct=0, sc=8) 00:12:22.227 [2024-05-15 08:52:38.269129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3100 is same with the state(5) to be set 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 [2024-05-15 08:52:38.289019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f280800bfe0 is same with the state(5) to be set 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed 
with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 [2024-05-15 08:52:38.289279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5220 is same with the state(5) to be set 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 [2024-05-15 08:52:38.290248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f280800c600 is same with the state(5) to be set 00:12:22.227 Initializing NVMe Controllers 00:12:22.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:22.227 Controller IO queue size 128, less than required. 00:12:22.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:22.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:22.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:22.227 Initialization complete. Launching workers. 
00:12:22.227 ======================================================== 00:12:22.227 Latency(us) 00:12:22.227 Device Information : IOPS MiB/s Average min max 00:12:22.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.08 0.08 897958.68 412.07 1015647.87 00:12:22.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.63 0.08 928167.55 843.78 2001209.28 00:12:22.227 ======================================================== 00:12:22.227 Total : 332.71 0.16 912815.50 412.07 2001209.28 00:12:22.227 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Write completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 Read completed with error (sct=0, sc=8) 00:12:22.227 [2024-05-15 08:52:38.290672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3ff0 is same with the state(5) to be set 00:12:22.227 [2024-05-15 08:52:38.291419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf3100 (9): Bad file descriptor 00:12:22.227 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:22.227 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.227 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:22.227 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71053 00:12:22.227 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71053 00:12:22.825 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71053) - No such process 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71053 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71053 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71053 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.825 [2024-05-15 08:52:38.816926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71100 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:22.825 08:52:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:22.825 [2024-05-15 08:52:38.998256] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:12:23.393 08:52:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:23.393 08:52:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:23.393 08:52:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:23.652 08:52:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:23.652 08:52:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:23.652 08:52:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.218 08:52:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.218 08:52:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:24.218 08:52:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.786 08:52:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.786 08:52:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:24.786 08:52:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.355 08:52:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.355 08:52:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:25.355 08:52:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.921 08:52:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.922 08:52:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:25.922 08:52:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.922 Initializing NVMe Controllers 00:12:25.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:25.922 Controller IO queue size 128, less than required. 00:12:25.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:25.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:25.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:25.922 Initialization complete. Launching workers. 
00:12:25.922 ======================================================== 00:12:25.922 Latency(us) 00:12:25.922 Device Information : IOPS MiB/s Average min max 00:12:25.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003455.71 1000155.13 1012094.95 00:12:25.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005374.98 1000314.85 1012877.41 00:12:25.922 ======================================================== 00:12:25.922 Total : 256.00 0.12 1004415.35 1000155.13 1012877.41 00:12:25.922 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71100 00:12:26.204 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71100) - No such process 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71100 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.204 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.204 rmmod nvme_tcp 00:12:26.204 rmmod nvme_fabrics 00:12:26.204 rmmod nvme_keyring 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71017 ']' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71017 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 71017 ']' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 71017 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71017 00:12:26.463 killing process with pid 71017 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71017' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 71017 00:12:26.463 [2024-05-15 08:52:42.472988] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 71017 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.463 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:26.721 00:12:26.721 real 0m8.525s 00:12:26.721 user 0m27.052s 00:12:26.721 sys 0m1.426s 00:12:26.721 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.721 08:52:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.721 ************************************ 00:12:26.721 END TEST nvmf_delete_subsystem 00:12:26.721 ************************************ 00:12:26.721 08:52:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:26.721 08:52:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:26.722 08:52:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.722 08:52:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.722 ************************************ 00:12:26.722 START TEST nvmf_ns_masking 00:12:26.722 ************************************ 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:26.722 * Looking for test storage... 
00:12:26.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=5cf0732b-85df-4d2d-905e-5b6404540baf 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.722 08:52:42 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:26.722 Cannot find device "nvmf_tgt_br" 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.722 Cannot find device "nvmf_tgt_br2" 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:26.722 Cannot find device "nvmf_tgt_br" 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:12:26.722 Cannot find device "nvmf_tgt_br2" 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:26.722 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.979 08:52:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:26.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:12:26.979 00:12:26.979 --- 10.0.0.2 ping statistics --- 00:12:26.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.979 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:26.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:26.979 00:12:26.979 --- 10.0.0.3 ping statistics --- 00:12:26.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.979 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:26.979 00:12:26.979 --- 10.0.0.1 ping statistics --- 00:12:26.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.979 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.979 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71329 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71329 00:12:26.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
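For reference, the virtual test topology that nvmf_veth_init assembles in the trace above (a condensed recap of the ip/iptables commands already shown, with the individual link-up steps and ping checks omitted) places the target inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, keeps the initiator in the root namespace on 10.0.0.1, and joins the two through veth pairs hanging off the nvmf_br bridge:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side (root namespace)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br      # bridge the host-side veth ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the target is then started inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF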
00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 71329 ']' 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.980 08:52:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.237 [2024-05-15 08:52:43.241668] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:27.237 [2024-05-15 08:52:43.242029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.237 [2024-05-15 08:52:43.383789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.237 [2024-05-15 08:52:43.454843] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.237 [2024-05-15 08:52:43.455106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.237 [2024-05-15 08:52:43.455268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.237 [2024-05-15 08:52:43.455422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.237 [2024-05-15 08:52:43.455468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:27.237 [2024-05-15 08:52:43.455646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.237 [2024-05-15 08:52:43.455798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.237 [2024-05-15 08:52:43.456242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.237 [2024-05-15 08:52:43.456281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.204 08:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:28.462 [2024-05-15 08:52:44.503359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.462 08:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:28.462 08:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:28.462 08:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:28.722 Malloc1 00:12:28.722 08:52:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:28.981 Malloc2 00:12:28.981 08:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.239 08:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:29.498 08:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.756 [2024-05-15 08:52:45.903922] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:29.756 [2024-05-15 08:52:45.904220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.756 08:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:29.756 08:52:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5cf0732b-85df-4d2d-905e-5b6404540baf -a 10.0.0.2 -s 4420 -i 4 00:12:30.015 08:52:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.015 08:52:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:30.015 08:52:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.015 08:52:46 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:30.015 08:52:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:31.916 [ 0]:0x1 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.916 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:32.241 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=59b10c81684949e4b4c27ea3f2fe39ce 00:12:32.241 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 59b10c81684949e4b4c27ea3f2fe39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.241 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:32.513 [ 0]:0x1 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=59b10c81684949e4b4c27ea3f2fe39ce 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 59b10c81684949e4b4c27ea3f2fe39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:32.513 [ 1]:0x2 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.513 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.771 08:52:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5cf0732b-85df-4d2d-905e-5b6404540baf -a 10.0.0.2 -s 4420 -i 4 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:12:33.335 08:52:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.303 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:35.582 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:35.582 [ 0]:0x2 00:12:35.583 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.583 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:35.583 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:35.583 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.583 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:35.840 [ 0]:0x1 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=59b10c81684949e4b4c27ea3f2fe39ce 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 59b10c81684949e4b4c27ea3f2fe39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:35.840 08:52:51 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:35.840 [ 1]:0x2 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.840 08:52:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:36.098 [ 0]:0x2 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:36.098 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.356 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5cf0732b-85df-4d2d-905e-5b6404540baf -a 10.0.0.2 -s 4420 -i 4 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:12:36.615 08:52:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:39.144 [ 0]:0x1 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=59b10c81684949e4b4c27ea3f2fe39ce 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 59b10c81684949e4b4c27ea3f2fe39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:39.144 08:52:54 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:39.144 [ 1]:0x2 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.144 08:52:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:39.144 [ 0]:0x2 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.144 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:39.403 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.661 [2024-05-15 08:52:55.662971] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:39.661 2024/05/15 08:52:55 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:39.661 request: 00:12:39.661 { 00:12:39.661 "method": "nvmf_ns_remove_host", 00:12:39.661 "params": { 00:12:39.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.661 "nsid": 2, 00:12:39.661 "host": "nqn.2016-06.io.spdk:host1" 00:12:39.661 } 00:12:39.661 } 00:12:39.661 Got JSON-RPC error response 00:12:39.661 GoRPCClient: error on JSON-RPC call 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:39.661 [ 0]:0x2 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a87bbdf612441e2bf8d262464392425 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a87bbdf612441e2bf8d262464392425 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:39.661 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.662 08:52:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.920 rmmod nvme_tcp 00:12:39.920 rmmod nvme_fabrics 00:12:39.920 rmmod nvme_keyring 00:12:39.920 
08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:39.920 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71329 ']' 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71329 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 71329 ']' 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 71329 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71329 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:40.179 killing process with pid 71329 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71329' 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 71329 00:12:40.179 [2024-05-15 08:52:56.181175] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 71329 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.179 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.438 08:52:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:40.438 00:12:40.438 real 0m13.675s 00:12:40.438 user 0m55.002s 00:12:40.438 sys 0m2.316s 00:12:40.438 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.438 08:52:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.438 ************************************ 00:12:40.438 END TEST nvmf_ns_masking 00:12:40.438 ************************************ 00:12:40.438 08:52:56 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:12:40.438 08:52:56 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:40.438 08:52:56 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:40.438 08:52:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:40.438 08:52:56 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.438 08:52:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:40.438 ************************************ 00:12:40.438 START TEST nvmf_host_management 00:12:40.438 ************************************ 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:40.438 * Looking for test storage... 00:12:40.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:40.438 
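With NET_TYPE=virt, nvmftestinit (nvmf/common.sh) builds the test network out of veth pairs, a bridge, and a dedicated network namespace for the target; the ip/iptables commands traced below do that work. A condensed sketch of the resulting topology, using the interface and namespace names from the trace (a second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way and is omitted here):

    ip netns add nvmf_tgt_ns_spdk                                    # target runs isolated in here
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joins the two halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                               # initiator -> target sanity check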
08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:40.438 Cannot find device "nvmf_tgt_br" 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:40.438 Cannot find device "nvmf_tgt_br2" 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:40.438 Cannot find device "nvmf_tgt_br" 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:40.438 Cannot find device "nvmf_tgt_br2" 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:40.438 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:40.696 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:40.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.696 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:40.696 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:40.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.696 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:40.697 08:52:56 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:40.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:40.697 00:12:40.697 --- 10.0.0.2 ping statistics --- 00:12:40.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.697 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:40.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:40.697 00:12:40.697 --- 10.0.0.3 ping statistics --- 00:12:40.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.697 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:40.697 00:12:40.697 --- 10.0.0.1 ping statistics --- 00:12:40.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.697 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=71893 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 71893 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 71893 ']' 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.697 08:52:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:40.955 [2024-05-15 08:52:56.953316] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:40.955 [2024-05-15 08:52:56.953413] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.955 [2024-05-15 08:52:57.090017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.955 [2024-05-15 08:52:57.151336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.955 [2024-05-15 08:52:57.151599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.955 [2024-05-15 08:52:57.151823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.955 [2024-05-15 08:52:57.152011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.955 [2024-05-15 08:52:57.152034] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
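nvmfappstart launches the target inside the namespace created above; -m 0x1E pins it to cores 1-4 (which is why four reactor threads start on cores 1-4 in the next lines), and waitforlisten blocks until the app answers on its RPC socket. Roughly what that amounts to, with the polling loop below only an illustration of the idea (the real waitforlisten lives in common/autotest_common.sh):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the target's RPC server is up
    done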
00:12:40.955 [2024-05-15 08:52:57.152242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.955 [2024-05-15 08:52:57.152357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.955 [2024-05-15 08:52:57.152429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:40.955 [2024-05-15 08:52:57.152605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 [2024-05-15 08:52:57.934243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.889 08:52:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 Malloc0 00:12:41.889 [2024-05-15 08:52:58.009885] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:41.889 [2024-05-15 08:52:58.010634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=71965 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 71965 /var/tmp/bdevperf.sock 
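At this point the target has a tcp transport, a Malloc0 bdev, and a subsystem listening on 10.0.0.2:4420. The actual RPC batch comes from the rpcs.txt fragment assembled at host_management.sh@23 and is not echoed verbatim in the log, so the explicit calls below are a reconstruction from the values that are visible in the trace (64 MiB / 512-byte Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode0 referenced by the bdevperf config below, listener on 10.0.0.2:4420; the serial is assumed to be the NVMF_SERIAL set earlier), not a copy of the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                     # as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420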
00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 71965 ']' 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:41.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:41.889 { 00:12:41.889 "params": { 00:12:41.889 "name": "Nvme$subsystem", 00:12:41.889 "trtype": "$TEST_TRANSPORT", 00:12:41.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.889 "adrfam": "ipv4", 00:12:41.889 "trsvcid": "$NVMF_PORT", 00:12:41.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.889 "hdgst": ${hdgst:-false}, 00:12:41.889 "ddgst": ${ddgst:-false} 00:12:41.889 }, 00:12:41.889 "method": "bdev_nvme_attach_controller" 00:12:41.889 } 00:12:41.889 EOF 00:12:41.889 )") 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:41.889 08:52:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:41.889 "params": { 00:12:41.889 "name": "Nvme0", 00:12:41.889 "trtype": "tcp", 00:12:41.889 "traddr": "10.0.0.2", 00:12:41.890 "adrfam": "ipv4", 00:12:41.890 "trsvcid": "4420", 00:12:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:41.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:41.890 "hdgst": false, 00:12:41.890 "ddgst": false 00:12:41.890 }, 00:12:41.890 "method": "bdev_nvme_attach_controller" 00:12:41.890 }' 00:12:41.890 [2024-05-15 08:52:58.111405] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
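The JSON printed above is fed to bdevperf over /dev/fd/63; its bdev_nvme_attach_controller entry makes bdevperf create Nvme0n1 against the target, and the harness then polls the bdevperf RPC socket until at least 100 reads have completed (waitforio, traced below, where read_io_count reaches 1091 in this run). Condensed, with the bdevperf invocation copied from the trace and the polling reduced to a single sample:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock framework_wait_init              # wait for bdevperf's RPC server
    reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && echo "I/O is flowing"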
00:12:41.890 [2024-05-15 08:52:58.111508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71965 ] 00:12:42.148 [2024-05-15 08:52:58.251079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.148 [2024-05-15 08:52:58.310951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.407 Running I/O for 10 seconds... 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.001 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:43.002 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.002 08:52:59 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.002 [2024-05-15 08:52:59.231243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.231534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.231704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.231883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 
08:52:59.232182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same 
with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1162d10 is same with the state(5) to be set 00:12:43.002 [2024-05-15 08:52:59.232947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.002 [2024-05-15 08:52:59.233132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233521] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.262 [2024-05-15 08:52:59.233683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.262 [2024-05-15 08:52:59.233693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.233982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.233991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
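(Annotation, not part of the captured console output.) Each READ/ABORTED pair in the block above and below is one command that was still in flight when the target tore down the submission queue, hence the repeated "ABORTED - SQ DELETION" completions. When scanning a saved console log, a hypothetical one-liner like the following (build.log is a placeholder filename) collapses the spam into a count and the affected LBAs:

# Count the aborted completions in a saved log:
grep -c 'ABORTED - SQ DELETION' build.log
# List the LBAs of the READ commands that were cancelled:
grep -o 'READ sqid:[0-9]* cid:[0-9]* nsid:[0-9]* lba:[0-9]*' build.log | sed 's/.*lba://' | sort -n | uniq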
00:12:43.263 [2024-05-15 08:52:59.234465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.234542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.234951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.235033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.235107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.263 [2024-05-15 08:52:59.235164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.263 [2024-05-15 08:52:59.235278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 [2024-05-15 08:52:59.235405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.235602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.264 [2024-05-15 08:52:59.235787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:43.264 [2024-05-15 08:52:59.235939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:12 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.264 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 [2024-05-15 08:52:59.236068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 08:52:59 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 [2024-05-15 08:52:59.236205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 [2024-05-15 08:52:59.236227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 [2024-05-15 08:52:59.236249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:43.264 [2024-05-15 08:52:59.236284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddb4f0 is same with the state(5) to be set 00:12:43.264 [2024-05-15 08:52:59.236370] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ddb4f0 was disconnected and freed. reset controller. 
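(Annotation, not part of the captured console output.) The qpair teardown and "reset controller" above, and the flood of aborted READs before it, are the intended effect of host_management.sh revoking and then restoring the host's access while bdevperf keeps I/O in flight. A condensed sketch of that RPC pair against the running target, using the NQNs shown in the log and assuming the target answers on the default /var/tmp/spdk.sock RPC socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Revoke access: outstanding I/O on the initiator side is aborted (SQ DELETION).
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access: the initiator's reset path reconnects and I/O can resume.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0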
00:12:43.264 [2024-05-15 08:52:59.236460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.264 [2024-05-15 08:52:59.236476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.264 [2024-05-15 08:52:59.236497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.264 [2024-05-15 08:52:59.236516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.264 [2024-05-15 08:52:59.236535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.264 [2024-05-15 08:52:59.236544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9740 is same with the state(5) to be set 00:12:43.264 [2024-05-15 08:52:59.237784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:43.264 task offset: 16384 on job bdev=Nvme0n1 fails 00:12:43.264 00:12:43.264 Latency(us) 00:12:43.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.264 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:43.264 Job: Nvme0n1 ended in about 0.78 seconds with error 00:12:43.264 Verification LBA range: start 0x0 length 0x400 00:12:43.264 Nvme0n1 : 0.78 1468.86 91.80 81.60 0.00 40335.41 8817.57 36461.85 00:12:43.264 =================================================================================================================== 00:12:43.264 Total : 1468.86 91.80 81.60 0.00 40335.41 8817.57 36461.85 00:12:43.264 [2024-05-15 08:52:59.239924] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:43.264 [2024-05-15 08:52:59.239958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd9740 (9): Bad file descriptor 00:12:43.264 08:52:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.264 08:52:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:43.264 [2024-05-15 08:52:59.252855] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
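(Annotation, not part of the captured console output.) A quick cross-check of the summary table above: 1468.86 read IOPS at the 64 KiB I/O size used here works out to 1468.86 / 16 ≈ 91.80 MiB/s, matching the MiB/s column, and 81.60 fails/s over the 0.78 s runtime is roughly 64 failed commands, i.e. the 64 outstanding I/Os (-q 64) that were aborted when the submission queue was deleted.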
00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 71965 00:12:44.200 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71965) - No such process 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:44.200 { 00:12:44.200 "params": { 00:12:44.200 "name": "Nvme$subsystem", 00:12:44.200 "trtype": "$TEST_TRANSPORT", 00:12:44.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:44.200 "adrfam": "ipv4", 00:12:44.200 "trsvcid": "$NVMF_PORT", 00:12:44.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:44.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:44.200 "hdgst": ${hdgst:-false}, 00:12:44.200 "ddgst": ${ddgst:-false} 00:12:44.200 }, 00:12:44.200 "method": "bdev_nvme_attach_controller" 00:12:44.200 } 00:12:44.200 EOF 00:12:44.200 )") 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:44.200 08:53:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:44.200 "params": { 00:12:44.200 "name": "Nvme0", 00:12:44.200 "trtype": "tcp", 00:12:44.200 "traddr": "10.0.0.2", 00:12:44.200 "adrfam": "ipv4", 00:12:44.200 "trsvcid": "4420", 00:12:44.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:44.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:44.200 "hdgst": false, 00:12:44.200 "ddgst": false 00:12:44.200 }, 00:12:44.200 "method": "bdev_nvme_attach_controller" 00:12:44.200 }' 00:12:44.200 [2024-05-15 08:53:00.312900] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:44.200 [2024-05-15 08:53:00.313004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72015 ] 00:12:44.458 [2024-05-15 08:53:00.492398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.458 [2024-05-15 08:53:00.593215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.716 Running I/O for 1 seconds... 
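(Annotation, not part of the captured console output.) Both bdevperf runs follow the same pattern: the harness decides that I/O is actually flowing by polling bdev_get_iostat over the bdevperf RPC socket until a read threshold is reached (the waitforio loop that reported read_io_count=1091 earlier). A condensed sketch of that poll, assuming the same socket and bdev name as in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in {10..1}; do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break   # same 100-read threshold the script checks above
    sleep 1
done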
00:12:45.650 00:12:45.650 Latency(us) 00:12:45.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.650 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:45.650 Verification LBA range: start 0x0 length 0x400 00:12:45.650 Nvme0n1 : 1.00 1147.74 71.73 0.00 0.00 53527.26 6672.76 86269.21 00:12:45.650 =================================================================================================================== 00:12:45.650 Total : 1147.74 71.73 0.00 0.00 53527.26 6672.76 86269.21 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.908 08:53:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.908 rmmod nvme_tcp 00:12:45.908 rmmod nvme_fabrics 00:12:45.908 rmmod nvme_keyring 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 71893 ']' 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 71893 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 71893 ']' 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 71893 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71893 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:45.908 killing process with pid 71893 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71893' 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 71893 00:12:45.908 [2024-05-15 08:53:02.064215] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:12:45.908 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 71893 00:12:46.166 [2024-05-15 08:53:02.241959] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:46.166 00:12:46.166 real 0m5.832s 00:12:46.166 user 0m23.177s 00:12:46.166 sys 0m1.214s 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:46.166 08:53:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:46.166 ************************************ 00:12:46.166 END TEST nvmf_host_management 00:12:46.166 ************************************ 00:12:46.166 08:53:02 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:46.166 08:53:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:46.166 08:53:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:46.166 08:53:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.166 ************************************ 00:12:46.166 START TEST nvmf_lvol 00:12:46.166 ************************************ 00:12:46.166 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:46.425 * Looking for test storage... 
00:12:46.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.425 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:46.426 08:53:02 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:46.426 Cannot find device "nvmf_tgt_br" 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.426 Cannot find device "nvmf_tgt_br2" 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:46.426 Cannot find device "nvmf_tgt_br" 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:46.426 Cannot find device "nvmf_tgt_br2" 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.426 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:46.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:46.684 00:12:46.684 --- 10.0.0.2 ping statistics --- 00:12:46.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.684 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:46.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:46.684 00:12:46.684 --- 10.0.0.3 ping statistics --- 00:12:46.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.684 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:46.684 00:12:46.684 --- 10.0.0.1 ping statistics --- 00:12:46.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.684 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:46.684 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=72218 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 72218 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 72218 ']' 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:46.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:46.685 08:53:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.685 [2024-05-15 08:53:02.857683] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:46.685 [2024-05-15 08:53:02.857793] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.943 [2024-05-15 08:53:02.996953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.943 [2024-05-15 08:53:03.065653] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.943 [2024-05-15 08:53:03.065714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:46.943 [2024-05-15 08:53:03.065728] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.943 [2024-05-15 08:53:03.065738] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.943 [2024-05-15 08:53:03.065748] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.943 [2024-05-15 08:53:03.066132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.943 [2024-05-15 08:53:03.066280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.943 [2024-05-15 08:53:03.066287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.914 08:53:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:47.914 [2024-05-15 08:53:04.062351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.914 08:53:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:48.172 08:53:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:48.172 08:53:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:48.430 08:53:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:48.430 08:53:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:48.786 08:53:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:49.045 08:53:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bf136d31-6530-492e-9c9e-51ddee4fa3da 00:12:49.045 08:53:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bf136d31-6530-492e-9c9e-51ddee4fa3da lvol 20 00:12:49.613 08:53:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=288e2c18-d516-41a8-9689-a980a8410a41 00:12:49.613 08:53:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:49.871 08:53:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 288e2c18-d516-41a8-9689-a980a8410a41 00:12:50.130 08:53:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:50.387 [2024-05-15 08:53:06.466928] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:12:50.388 [2024-05-15 08:53:06.467201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.388 08:53:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.646 08:53:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=72370 00:12:50.646 08:53:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:50.646 08:53:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:51.580 08:53:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 288e2c18-d516-41a8-9689-a980a8410a41 MY_SNAPSHOT 00:12:52.147 08:53:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f0dd579-a806-4504-bc96-915dfc4c35e0 00:12:52.147 08:53:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 288e2c18-d516-41a8-9689-a980a8410a41 30 00:12:52.412 08:53:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1f0dd579-a806-4504-bc96-915dfc4c35e0 MY_CLONE 00:12:52.671 08:53:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a464cce3-14c1-42d0-9487-a7998f675ec3 00:12:52.671 08:53:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a464cce3-14c1-42d0-9487-a7998f675ec3 00:12:53.606 08:53:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 72370 00:13:01.711 Initializing NVMe Controllers 00:13:01.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:01.711 Controller IO queue size 128, less than required. 00:13:01.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:01.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:01.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:01.712 Initialization complete. Launching workers. 
00:13:01.712 ======================================================== 00:13:01.712 Latency(us) 00:13:01.712 Device Information : IOPS MiB/s Average min max 00:13:01.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9967.90 38.94 12848.56 2047.30 64682.03 00:13:01.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10015.80 39.12 12783.88 691.03 139286.80 00:13:01.712 ======================================================== 00:13:01.712 Total : 19983.70 78.06 12816.14 691.03 139286.80 00:13:01.712 00:13:01.712 08:53:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 288e2c18-d516-41a8-9689-a980a8410a41 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf136d31-6530-492e-9c9e-51ddee4fa3da 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.712 rmmod nvme_tcp 00:13:01.712 rmmod nvme_fabrics 00:13:01.712 rmmod nvme_keyring 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 72218 ']' 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 72218 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 72218 ']' 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 72218 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:01.712 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72218 00:13:01.970 killing process with pid 72218 00:13:01.970 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:01.970 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:01.970 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72218' 00:13:01.970 08:53:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 72218 00:13:01.970 [2024-05-15 08:53:17.948938] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:01.970 08:53:17 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 72218 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:01.970 00:13:01.970 real 0m15.851s 00:13:01.970 user 1m6.469s 00:13:01.970 sys 0m3.928s 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.970 08:53:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:01.970 ************************************ 00:13:01.970 END TEST nvmf_lvol 00:13:01.970 ************************************ 00:13:02.228 08:53:18 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:02.228 08:53:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:02.228 08:53:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.228 08:53:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.228 ************************************ 00:13:02.228 START TEST nvmf_lvs_grow 00:13:02.228 ************************************ 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:02.228 * Looking for test storage... 
00:13:02.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.228 08:53:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:02.229 Cannot find device "nvmf_tgt_br" 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.229 Cannot find device "nvmf_tgt_br2" 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:02.229 Cannot find device "nvmf_tgt_br" 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:02.229 Cannot find device "nvmf_tgt_br2" 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:13:02.229 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.488 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:02.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:02.488 00:13:02.488 --- 10.0.0.2 ping statistics --- 00:13:02.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.488 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:02.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:13:02.488 00:13:02.488 --- 10.0.0.3 ping statistics --- 00:13:02.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.488 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:02.488 00:13:02.488 --- 10.0.0.1 ping statistics --- 00:13:02.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.488 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=72735 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 72735 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 72735 ']' 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
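The nvmf_veth_init sequence traced above can be reproduced by hand; the following is a condensed sketch using the same interface names and 10.0.0.x addresses as the script (the second target interface, nvmf_tgt_if2/nvmf_tgt_br2, is set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host side reaches the target namespace across the bridge
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0x1 &   # target runs inside the namespace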
00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.488 08:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:02.745 [2024-05-15 08:53:18.777115] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:02.745 [2024-05-15 08:53:18.777215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.745 [2024-05-15 08:53:18.922271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.003 [2024-05-15 08:53:18.981404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.003 [2024-05-15 08:53:18.981454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.003 [2024-05-15 08:53:18.981466] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.003 [2024-05-15 08:53:18.981474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.003 [2024-05-15 08:53:18.981482] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.003 [2024-05-15 08:53:18.981506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.569 08:53:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.569 08:53:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:13:03.569 08:53:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.569 08:53:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.569 08:53:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:03.826 08:53:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.826 08:53:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:04.083 [2024-05-15 08:53:20.069993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:04.083 ************************************ 00:13:04.083 START TEST lvs_grow_clean 00:13:04.083 ************************************ 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:04.083 08:53:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:04.083 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:04.340 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:04.340 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:04.597 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:04.597 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:04.597 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:04.855 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:04.855 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:04.855 08:53:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 21a75d6f-380d-4047-bf33-8f3b360cfebf lvol 150 00:13:05.113 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=881af59d-8baf-4597-97a5-ac786a014757 00:13:05.113 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:05.113 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:05.371 [2024-05-15 08:53:21.508486] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:05.371 [2024-05-15 08:53:21.508588] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:05.371 true 00:13:05.371 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:05.371 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:05.629 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:05.629 08:53:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:06.196 08:53:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 881af59d-8baf-4597-97a5-ac786a014757 00:13:06.196 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:06.454 [2024-05-15 08:53:22.677416] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:06.454 [2024-05-15 08:53:22.677713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72899 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72899 /var/tmp/bdevperf.sock 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 72899 ']' 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:06.712 08:53:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:06.992 [2024-05-15 08:53:22.988489] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
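Stripped of the test plumbing, the grow path exercised by lvs_grow_clean comes down to the RPC sequence below; rpc.py stands for scripts/rpc.py, and the path, sizes and cluster options are the ones echoed in the trace:

    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO"
    rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$LVS" lvol 150
    # grow the backing file, let the AIO bdev pick up the new size,
    # then hand the extra clusters to the lvstore
    truncate -s 400M "$AIO"
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -u "$LVS"
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after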
00:13:06.992 [2024-05-15 08:53:22.988614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72899 ] 00:13:06.992 [2024-05-15 08:53:23.122240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.992 [2024-05-15 08:53:23.181434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.255 08:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.255 08:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:13:07.255 08:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:07.513 Nvme0n1 00:13:07.513 08:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:07.771 [ 00:13:07.771 { 00:13:07.771 "aliases": [ 00:13:07.771 "881af59d-8baf-4597-97a5-ac786a014757" 00:13:07.771 ], 00:13:07.771 "assigned_rate_limits": { 00:13:07.771 "r_mbytes_per_sec": 0, 00:13:07.771 "rw_ios_per_sec": 0, 00:13:07.771 "rw_mbytes_per_sec": 0, 00:13:07.771 "w_mbytes_per_sec": 0 00:13:07.771 }, 00:13:07.771 "block_size": 4096, 00:13:07.771 "claimed": false, 00:13:07.771 "driver_specific": { 00:13:07.771 "mp_policy": "active_passive", 00:13:07.771 "nvme": [ 00:13:07.771 { 00:13:07.771 "ctrlr_data": { 00:13:07.771 "ana_reporting": false, 00:13:07.771 "cntlid": 1, 00:13:07.771 "firmware_revision": "24.05", 00:13:07.771 "model_number": "SPDK bdev Controller", 00:13:07.771 "multi_ctrlr": true, 00:13:07.771 "oacs": { 00:13:07.771 "firmware": 0, 00:13:07.771 "format": 0, 00:13:07.771 "ns_manage": 0, 00:13:07.771 "security": 0 00:13:07.771 }, 00:13:07.771 "serial_number": "SPDK0", 00:13:07.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:07.771 "vendor_id": "0x8086" 00:13:07.771 }, 00:13:07.771 "ns_data": { 00:13:07.771 "can_share": true, 00:13:07.771 "id": 1 00:13:07.771 }, 00:13:07.771 "trid": { 00:13:07.771 "adrfam": "IPv4", 00:13:07.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:07.771 "traddr": "10.0.0.2", 00:13:07.771 "trsvcid": "4420", 00:13:07.771 "trtype": "TCP" 00:13:07.771 }, 00:13:07.771 "vs": { 00:13:07.771 "nvme_version": "1.3" 00:13:07.771 } 00:13:07.771 } 00:13:07.771 ] 00:13:07.771 }, 00:13:07.771 "memory_domains": [ 00:13:07.771 { 00:13:07.771 "dma_device_id": "system", 00:13:07.771 "dma_device_type": 1 00:13:07.771 } 00:13:07.771 ], 00:13:07.771 "name": "Nvme0n1", 00:13:07.771 "num_blocks": 38912, 00:13:07.771 "product_name": "NVMe disk", 00:13:07.771 "supported_io_types": { 00:13:07.771 "abort": true, 00:13:07.771 "compare": true, 00:13:07.771 "compare_and_write": true, 00:13:07.771 "flush": true, 00:13:07.771 "nvme_admin": true, 00:13:07.771 "nvme_io": true, 00:13:07.771 "read": true, 00:13:07.771 "reset": true, 00:13:07.771 "unmap": true, 00:13:07.771 "write": true, 00:13:07.771 "write_zeroes": true 00:13:07.771 }, 00:13:07.771 "uuid": "881af59d-8baf-4597-97a5-ac786a014757", 00:13:07.771 "zoned": false 00:13:07.771 } 00:13:07.771 ] 00:13:07.771 08:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72933 00:13:07.771 08:53:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:07.771 08:53:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:08.029 Running I/O for 10 seconds... 00:13:08.962 Latency(us) 00:13:08.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.962 Nvme0n1 : 1.00 7699.00 30.07 0.00 0.00 0.00 0.00 0.00 00:13:08.962 =================================================================================================================== 00:13:08.962 Total : 7699.00 30.07 0.00 0.00 0.00 0.00 0.00 00:13:08.962 00:13:09.896 08:53:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:09.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.896 Nvme0n1 : 2.00 7671.50 29.97 0.00 0.00 0.00 0.00 0.00 00:13:09.896 =================================================================================================================== 00:13:09.896 Total : 7671.50 29.97 0.00 0.00 0.00 0.00 0.00 00:13:09.896 00:13:10.154 true 00:13:10.154 08:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:10.154 08:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:10.412 08:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:10.412 08:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:10.412 08:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 72933 00:13:10.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.978 Nvme0n1 : 3.00 7748.33 30.27 0.00 0.00 0.00 0.00 0.00 00:13:10.978 =================================================================================================================== 00:13:10.978 Total : 7748.33 30.27 0.00 0.00 0.00 0.00 0.00 00:13:10.978 00:13:11.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.912 Nvme0n1 : 4.00 7782.00 30.40 0.00 0.00 0.00 0.00 0.00 00:13:11.912 =================================================================================================================== 00:13:11.912 Total : 7782.00 30.40 0.00 0.00 0.00 0.00 0.00 00:13:11.912 00:13:12.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.846 Nvme0n1 : 5.00 7775.60 30.37 0.00 0.00 0.00 0.00 0.00 00:13:12.846 =================================================================================================================== 00:13:12.846 Total : 7775.60 30.37 0.00 0.00 0.00 0.00 0.00 00:13:12.846 00:13:13.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.780 Nvme0n1 : 6.00 7762.33 30.32 0.00 0.00 0.00 0.00 0.00 00:13:13.780 =================================================================================================================== 00:13:13.780 Total : 7762.33 30.32 0.00 0.00 0.00 0.00 0.00 00:13:13.780 00:13:15.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:13:15.159 Nvme0n1 : 7.00 7765.14 30.33 0.00 0.00 0.00 0.00 0.00 00:13:15.159 =================================================================================================================== 00:13:15.159 Total : 7765.14 30.33 0.00 0.00 0.00 0.00 0.00 00:13:15.159 00:13:16.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.093 Nvme0n1 : 8.00 7739.50 30.23 0.00 0.00 0.00 0.00 0.00 00:13:16.093 =================================================================================================================== 00:13:16.093 Total : 7739.50 30.23 0.00 0.00 0.00 0.00 0.00 00:13:16.093 00:13:17.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.028 Nvme0n1 : 9.00 7720.56 30.16 0.00 0.00 0.00 0.00 0.00 00:13:17.028 =================================================================================================================== 00:13:17.028 Total : 7720.56 30.16 0.00 0.00 0.00 0.00 0.00 00:13:17.028 00:13:17.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.965 Nvme0n1 : 10.00 7717.50 30.15 0.00 0.00 0.00 0.00 0.00 00:13:17.965 =================================================================================================================== 00:13:17.965 Total : 7717.50 30.15 0.00 0.00 0.00 0.00 0.00 00:13:17.965 00:13:17.965 00:13:17.965 Latency(us) 00:13:17.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.965 Nvme0n1 : 10.01 7720.54 30.16 0.00 0.00 16573.74 7983.48 33363.78 00:13:17.965 =================================================================================================================== 00:13:17.965 Total : 7720.54 30.16 0.00 0.00 16573.74 7983.48 33363.78 00:13:17.965 0 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72899 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 72899 ']' 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 72899 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72899 00:13:17.965 killing process with pid 72899 00:13:17.965 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.965 00:13:17.965 Latency(us) 00:13:17.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.965 =================================================================================================================== 00:13:17.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72899' 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 72899 00:13:17.965 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 72899 00:13:18.246 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.505 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:18.764 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:18.764 08:53:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:19.022 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:19.022 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:19.022 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:19.281 [2024-05-15 08:53:35.413289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:19.281 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:19.539 2024/05/15 08:53:35 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:21a75d6f-380d-4047-bf33-8f3b360cfebf], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:13:19.539 request: 00:13:19.539 { 00:13:19.539 "method": "bdev_lvol_get_lvstores", 00:13:19.539 "params": { 00:13:19.539 "uuid": 
"21a75d6f-380d-4047-bf33-8f3b360cfebf" 00:13:19.539 } 00:13:19.539 } 00:13:19.539 Got JSON-RPC error response 00:13:19.539 GoRPCClient: error on JSON-RPC call 00:13:19.798 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:19.798 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.798 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.798 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.798 08:53:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.059 aio_bdev 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 881af59d-8baf-4597-97a5-ac786a014757 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=881af59d-8baf-4597-97a5-ac786a014757 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:20.059 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:20.318 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 881af59d-8baf-4597-97a5-ac786a014757 -t 2000 00:13:20.577 [ 00:13:20.577 { 00:13:20.577 "aliases": [ 00:13:20.577 "lvs/lvol" 00:13:20.577 ], 00:13:20.577 "assigned_rate_limits": { 00:13:20.577 "r_mbytes_per_sec": 0, 00:13:20.577 "rw_ios_per_sec": 0, 00:13:20.577 "rw_mbytes_per_sec": 0, 00:13:20.577 "w_mbytes_per_sec": 0 00:13:20.577 }, 00:13:20.577 "block_size": 4096, 00:13:20.577 "claimed": false, 00:13:20.577 "driver_specific": { 00:13:20.577 "lvol": { 00:13:20.577 "base_bdev": "aio_bdev", 00:13:20.577 "clone": false, 00:13:20.577 "esnap_clone": false, 00:13:20.577 "lvol_store_uuid": "21a75d6f-380d-4047-bf33-8f3b360cfebf", 00:13:20.577 "num_allocated_clusters": 38, 00:13:20.577 "snapshot": false, 00:13:20.577 "thin_provision": false 00:13:20.577 } 00:13:20.577 }, 00:13:20.577 "name": "881af59d-8baf-4597-97a5-ac786a014757", 00:13:20.577 "num_blocks": 38912, 00:13:20.577 "product_name": "Logical Volume", 00:13:20.577 "supported_io_types": { 00:13:20.577 "abort": false, 00:13:20.577 "compare": false, 00:13:20.577 "compare_and_write": false, 00:13:20.577 "flush": false, 00:13:20.577 "nvme_admin": false, 00:13:20.577 "nvme_io": false, 00:13:20.577 "read": true, 00:13:20.577 "reset": true, 00:13:20.577 "unmap": true, 00:13:20.577 "write": true, 00:13:20.577 "write_zeroes": true 00:13:20.577 }, 00:13:20.577 "uuid": "881af59d-8baf-4597-97a5-ac786a014757", 00:13:20.577 "zoned": false 00:13:20.577 } 00:13:20.577 ] 00:13:20.577 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:13:20.577 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:13:20.577 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:20.835 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:20.835 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:20.835 08:53:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:21.094 08:53:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:21.094 08:53:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 881af59d-8baf-4597-97a5-ac786a014757 00:13:21.351 08:53:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21a75d6f-380d-4047-bf33-8f3b360cfebf 00:13:21.609 08:53:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:21.868 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.435 ************************************ 00:13:22.435 END TEST lvs_grow_clean 00:13:22.435 ************************************ 00:13:22.435 00:13:22.435 real 0m18.309s 00:13:22.435 user 0m17.476s 00:13:22.435 sys 0m2.202s 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:22.435 ************************************ 00:13:22.435 START TEST lvs_grow_dirty 00:13:22.435 ************************************ 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.435 08:53:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.435 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:22.693 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:22.693 08:53:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:22.951 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:22.951 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:22.951 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:23.209 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:23.209 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:23.209 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 lvol 150 00:13:23.469 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:23.469 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:23.469 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:23.727 [2024-05-15 08:53:39.935687] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:23.727 [2024-05-15 08:53:39.935770] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:23.727 true 00:13:23.986 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:23.986 08:53:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:24.244 08:53:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:24.244 08:53:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:24.502 08:53:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:24.760 08:53:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
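Condensed, the setup that the lvs_grow_dirty trace above walks through is: create a 200 MiB backing file, expose it as an AIO bdev with 4096-byte blocks, build a logical volume store and a 150 MiB lvol on top of it, and export that lvol over NVMe/TCP. A rough sketch of the same RPC sequence, with placeholder paths and UUIDs rather than the values from this run:

    # backing file -> AIO bdev (4096-byte block size)
    truncate -s 200M /tmp/aio_file
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    # lvstore (4 MiB clusters) and a 150 MiB lvol on top of it
    scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs
    scripts/rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150
    # export the lvol over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The listener registration is the last command above; the target's "NVMe/TCP Target Listening" notice and the bdevperf attach follow in the trace below.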
00:13:25.018 [2024-05-15 08:53:41.064285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.018 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73337 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73337 /var/tmp/bdevperf.sock 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 73337 ']' 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:25.277 08:53:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:25.277 [2024-05-15 08:53:41.459386] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:13:25.277 [2024-05-15 08:53:41.459522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73337 ] 00:13:25.534 [2024-05-15 08:53:41.630270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.534 [2024-05-15 08:53:41.716669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.469 08:53:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:26.469 08:53:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:26.469 08:53:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:26.727 Nvme0n1 00:13:26.727 08:53:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:26.985 [ 00:13:26.986 { 00:13:26.986 "aliases": [ 00:13:26.986 "b61586c2-9c7a-41e4-9230-8d5962fe5817" 00:13:26.986 ], 00:13:26.986 "assigned_rate_limits": { 00:13:26.986 "r_mbytes_per_sec": 0, 00:13:26.986 "rw_ios_per_sec": 0, 00:13:26.986 "rw_mbytes_per_sec": 0, 00:13:26.986 "w_mbytes_per_sec": 0 00:13:26.986 }, 00:13:26.986 "block_size": 4096, 00:13:26.986 "claimed": false, 00:13:26.986 "driver_specific": { 00:13:26.986 "mp_policy": "active_passive", 00:13:26.986 "nvme": [ 00:13:26.986 { 00:13:26.986 "ctrlr_data": { 00:13:26.986 "ana_reporting": false, 00:13:26.986 "cntlid": 1, 00:13:26.986 "firmware_revision": "24.05", 00:13:26.986 "model_number": "SPDK bdev Controller", 00:13:26.986 "multi_ctrlr": true, 00:13:26.986 "oacs": { 00:13:26.986 "firmware": 0, 00:13:26.986 "format": 0, 00:13:26.986 "ns_manage": 0, 00:13:26.986 "security": 0 00:13:26.986 }, 00:13:26.986 "serial_number": "SPDK0", 00:13:26.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:26.986 "vendor_id": "0x8086" 00:13:26.986 }, 00:13:26.986 "ns_data": { 00:13:26.986 "can_share": true, 00:13:26.986 "id": 1 00:13:26.986 }, 00:13:26.986 "trid": { 00:13:26.986 "adrfam": "IPv4", 00:13:26.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:26.986 "traddr": "10.0.0.2", 00:13:26.986 "trsvcid": "4420", 00:13:26.986 "trtype": "TCP" 00:13:26.986 }, 00:13:26.986 "vs": { 00:13:26.986 "nvme_version": "1.3" 00:13:26.986 } 00:13:26.986 } 00:13:26.986 ] 00:13:26.986 }, 00:13:26.986 "memory_domains": [ 00:13:26.986 { 00:13:26.986 "dma_device_id": "system", 00:13:26.986 "dma_device_type": 1 00:13:26.986 } 00:13:26.986 ], 00:13:26.986 "name": "Nvme0n1", 00:13:26.986 "num_blocks": 38912, 00:13:26.986 "product_name": "NVMe disk", 00:13:26.986 "supported_io_types": { 00:13:26.986 "abort": true, 00:13:26.986 "compare": true, 00:13:26.986 "compare_and_write": true, 00:13:26.986 "flush": true, 00:13:26.986 "nvme_admin": true, 00:13:26.986 "nvme_io": true, 00:13:26.986 "read": true, 00:13:26.986 "reset": true, 00:13:26.986 "unmap": true, 00:13:26.986 "write": true, 00:13:26.986 "write_zeroes": true 00:13:26.986 }, 00:13:26.986 "uuid": "b61586c2-9c7a-41e4-9230-8d5962fe5817", 00:13:26.986 "zoned": false 00:13:26.986 } 00:13:26.986 ] 00:13:26.986 08:53:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:26.986 08:53:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73390 00:13:26.986 08:53:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:26.986 Running I/O for 10 seconds... 00:13:28.358 Latency(us) 00:13:28.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.358 Nvme0n1 : 1.00 7899.00 30.86 0.00 0.00 0.00 0.00 0.00 00:13:28.358 =================================================================================================================== 00:13:28.358 Total : 7899.00 30.86 0.00 0.00 0.00 0.00 0.00 00:13:28.358 00:13:28.926 08:53:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:29.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.211 Nvme0n1 : 2.00 7867.00 30.73 0.00 0.00 0.00 0.00 0.00 00:13:29.211 =================================================================================================================== 00:13:29.211 Total : 7867.00 30.73 0.00 0.00 0.00 0.00 0.00 00:13:29.211 00:13:29.211 true 00:13:29.211 08:53:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:29.211 08:53:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:29.493 08:53:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:29.493 08:53:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:29.493 08:53:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 73390 00:13:30.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.061 Nvme0n1 : 3.00 7961.00 31.10 0.00 0.00 0.00 0.00 0.00 00:13:30.061 =================================================================================================================== 00:13:30.061 Total : 7961.00 31.10 0.00 0.00 0.00 0.00 0.00 00:13:30.061 00:13:30.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.997 Nvme0n1 : 4.00 7986.50 31.20 0.00 0.00 0.00 0.00 0.00 00:13:30.997 =================================================================================================================== 00:13:30.997 Total : 7986.50 31.20 0.00 0.00 0.00 0.00 0.00 00:13:30.997 00:13:32.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.372 Nvme0n1 : 5.00 8004.40 31.27 0.00 0.00 0.00 0.00 0.00 00:13:32.372 =================================================================================================================== 00:13:32.372 Total : 8004.40 31.27 0.00 0.00 0.00 0.00 0.00 00:13:32.372 00:13:33.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.307 Nvme0n1 : 6.00 7957.17 31.08 0.00 0.00 0.00 0.00 0.00 00:13:33.307 =================================================================================================================== 00:13:33.307 Total : 7957.17 31.08 0.00 0.00 0.00 0.00 0.00 00:13:33.307 00:13:34.242 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.242 Nvme0n1 : 7.00 7913.00 30.91 0.00 0.00 0.00 0.00 0.00 00:13:34.242 =================================================================================================================== 00:13:34.242 Total : 7913.00 30.91 0.00 0.00 0.00 0.00 0.00 00:13:34.242 00:13:35.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.229 Nvme0n1 : 8.00 7879.88 30.78 0.00 0.00 0.00 0.00 0.00 00:13:35.229 =================================================================================================================== 00:13:35.229 Total : 7879.88 30.78 0.00 0.00 0.00 0.00 0.00 00:13:35.229 00:13:36.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.163 Nvme0n1 : 9.00 7863.56 30.72 0.00 0.00 0.00 0.00 0.00 00:13:36.163 =================================================================================================================== 00:13:36.163 Total : 7863.56 30.72 0.00 0.00 0.00 0.00 0.00 00:13:36.163 00:13:37.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.095 Nvme0n1 : 10.00 7848.50 30.66 0.00 0.00 0.00 0.00 0.00 00:13:37.095 =================================================================================================================== 00:13:37.095 Total : 7848.50 30.66 0.00 0.00 0.00 0.00 0.00 00:13:37.095 00:13:37.095 00:13:37.095 Latency(us) 00:13:37.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.095 Nvme0n1 : 10.02 7849.09 30.66 0.00 0.00 16301.46 2785.28 73400.32 00:13:37.095 =================================================================================================================== 00:13:37.095 Total : 7849.09 30.66 0.00 0.00 16301.46 2785.28 73400.32 00:13:37.095 0 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73337 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 73337 ']' 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 73337 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73337 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:37.095 killing process with pid 73337 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73337' 00:13:37.095 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 73337 00:13:37.095 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.095 00:13:37.095 Latency(us) 00:13:37.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.095 =================================================================================================================== 00:13:37.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.095 08:53:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 73337 00:13:37.352 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:37.611 08:53:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:37.868 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:37.868 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:38.126 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:38.126 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:38.126 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 72735 00:13:38.126 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 72735 00:13:38.384 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 72735 Killed "${NVMF_APP[@]}" "$@" 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=73553 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 73553 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 73553 ']' 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.384 08:53:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:38.384 [2024-05-15 08:53:54.444823] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:13:38.384 [2024-05-15 08:53:54.444917] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.384 [2024-05-15 08:53:54.583331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.641 [2024-05-15 08:53:54.642157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.641 [2024-05-15 08:53:54.642211] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.641 [2024-05-15 08:53:54.642223] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.641 [2024-05-15 08:53:54.642232] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.641 [2024-05-15 08:53:54.642239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.641 [2024-05-15 08:53:54.642263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.206 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.206 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:39.206 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.206 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.206 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:39.464 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.464 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:39.464 [2024-05-15 08:53:55.673155] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:39.464 [2024-05-15 08:53:55.673441] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:39.464 [2024-05-15 08:53:55.673615] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:39.722 08:53:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:39.981 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b61586c2-9c7a-41e4-9230-8d5962fe5817 -t 2000 00:13:40.274 [ 00:13:40.274 { 00:13:40.274 "aliases": [ 00:13:40.274 "lvs/lvol" 00:13:40.274 ], 00:13:40.274 "assigned_rate_limits": { 00:13:40.274 "r_mbytes_per_sec": 0, 00:13:40.274 "rw_ios_per_sec": 0, 00:13:40.274 "rw_mbytes_per_sec": 0, 00:13:40.274 "w_mbytes_per_sec": 0 00:13:40.274 }, 00:13:40.274 "block_size": 4096, 00:13:40.274 "claimed": false, 00:13:40.274 "driver_specific": { 00:13:40.274 "lvol": { 00:13:40.274 "base_bdev": "aio_bdev", 00:13:40.274 "clone": false, 00:13:40.274 "esnap_clone": false, 00:13:40.274 "lvol_store_uuid": "0c78e1e3-179c-4181-8290-b37bf4237cf8", 00:13:40.274 "num_allocated_clusters": 38, 00:13:40.274 "snapshot": false, 00:13:40.274 "thin_provision": false 00:13:40.274 } 00:13:40.274 }, 00:13:40.274 "name": "b61586c2-9c7a-41e4-9230-8d5962fe5817", 00:13:40.274 "num_blocks": 38912, 00:13:40.274 "product_name": "Logical Volume", 00:13:40.274 "supported_io_types": { 00:13:40.274 "abort": false, 00:13:40.274 "compare": false, 00:13:40.274 "compare_and_write": false, 00:13:40.274 "flush": false, 00:13:40.274 "nvme_admin": false, 00:13:40.274 "nvme_io": false, 00:13:40.274 "read": true, 00:13:40.274 "reset": true, 00:13:40.274 "unmap": true, 00:13:40.274 "write": true, 00:13:40.274 "write_zeroes": true 00:13:40.274 }, 00:13:40.274 "uuid": "b61586c2-9c7a-41e4-9230-8d5962fe5817", 00:13:40.274 "zoned": false 00:13:40.274 } 00:13:40.274 ] 00:13:40.274 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:40.274 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:40.274 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:40.532 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:40.532 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:40.532 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:40.790 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:40.790 08:53:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:41.048 [2024-05-15 08:53:57.111043] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:41.048 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:41.307 2024/05/15 08:53:57 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0c78e1e3-179c-4181-8290-b37bf4237cf8], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:13:41.307 request: 00:13:41.307 { 00:13:41.307 "method": "bdev_lvol_get_lvstores", 00:13:41.307 "params": { 00:13:41.307 "uuid": "0c78e1e3-179c-4181-8290-b37bf4237cf8" 00:13:41.307 } 00:13:41.307 } 00:13:41.307 Got JSON-RPC error response 00:13:41.307 GoRPCClient: error on JSON-RPC call 00:13:41.307 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:41.307 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.307 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.307 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.307 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:41.567 aio_bdev 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:41.567 08:53:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:41.825 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b61586c2-9c7a-41e4-9230-8d5962fe5817 -t 2000 00:13:42.084 [ 00:13:42.084 { 00:13:42.084 "aliases": [ 00:13:42.084 "lvs/lvol" 00:13:42.084 ], 00:13:42.084 
"assigned_rate_limits": { 00:13:42.084 "r_mbytes_per_sec": 0, 00:13:42.084 "rw_ios_per_sec": 0, 00:13:42.084 "rw_mbytes_per_sec": 0, 00:13:42.084 "w_mbytes_per_sec": 0 00:13:42.084 }, 00:13:42.084 "block_size": 4096, 00:13:42.084 "claimed": false, 00:13:42.084 "driver_specific": { 00:13:42.084 "lvol": { 00:13:42.084 "base_bdev": "aio_bdev", 00:13:42.084 "clone": false, 00:13:42.084 "esnap_clone": false, 00:13:42.084 "lvol_store_uuid": "0c78e1e3-179c-4181-8290-b37bf4237cf8", 00:13:42.084 "num_allocated_clusters": 38, 00:13:42.084 "snapshot": false, 00:13:42.084 "thin_provision": false 00:13:42.084 } 00:13:42.084 }, 00:13:42.084 "name": "b61586c2-9c7a-41e4-9230-8d5962fe5817", 00:13:42.084 "num_blocks": 38912, 00:13:42.084 "product_name": "Logical Volume", 00:13:42.084 "supported_io_types": { 00:13:42.084 "abort": false, 00:13:42.084 "compare": false, 00:13:42.084 "compare_and_write": false, 00:13:42.084 "flush": false, 00:13:42.084 "nvme_admin": false, 00:13:42.084 "nvme_io": false, 00:13:42.084 "read": true, 00:13:42.084 "reset": true, 00:13:42.084 "unmap": true, 00:13:42.084 "write": true, 00:13:42.084 "write_zeroes": true 00:13:42.084 }, 00:13:42.084 "uuid": "b61586c2-9c7a-41e4-9230-8d5962fe5817", 00:13:42.084 "zoned": false 00:13:42.084 } 00:13:42.084 ] 00:13:42.084 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:42.084 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:42.084 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:42.651 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:42.651 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:42.651 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:42.651 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:42.651 08:53:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b61586c2-9c7a-41e4-9230-8d5962fe5817 00:13:43.218 08:53:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c78e1e3-179c-4181-8290-b37bf4237cf8 00:13:43.476 08:53:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:43.734 08:53:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:43.992 00:13:43.992 real 0m21.608s 00:13:43.992 user 0m44.638s 00:13:43.992 sys 0m7.861s 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:43.992 ************************************ 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 END TEST lvs_grow_dirty 00:13:43.992 ************************************ 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:43.992 nvmf_trace.0 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.992 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.251 rmmod nvme_tcp 00:13:44.251 rmmod nvme_fabrics 00:13:44.251 rmmod nvme_keyring 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 73553 ']' 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 73553 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 73553 ']' 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 73553 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73553 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:44.251 killing process with pid 73553 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73553' 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 73553 00:13:44.251 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 73553 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:44.511 00:13:44.511 real 0m42.341s 00:13:44.511 user 1m8.919s 00:13:44.511 sys 0m10.695s 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.511 08:54:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.511 ************************************ 00:13:44.511 END TEST nvmf_lvs_grow 00:13:44.511 ************************************ 00:13:44.511 08:54:00 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:44.511 08:54:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:44.511 08:54:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.511 08:54:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.511 ************************************ 00:13:44.511 START TEST nvmf_bdev_io_wait 00:13:44.511 ************************************ 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:44.511 * Looking for test storage... 00:13:44.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.511 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.512 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.770 
08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:44.770 Cannot find device "nvmf_tgt_br" 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.770 Cannot find device "nvmf_tgt_br2" 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:44.770 Cannot find device "nvmf_tgt_br" 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:44.770 Cannot find device "nvmf_tgt_br2" 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
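The nvmf_bdev_io_wait suite starting above first rebuilds the virtual test network (NET_TYPE=virt): a network namespace for the target plus veth pairs for the initiator- and target-side interfaces, with 10.0.0.1 on the host and 10.0.0.2/10.0.0.3 inside the namespace. A condensed sketch of the topology commands from the nvmf_veth_init trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is handled the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The nvmf_br bridge, the iptables rule that opens TCP port 4420 on nvmf_init_if, and the reachability pings to 10.0.0.1/2/3 follow in the trace below.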
00:13:44.770 08:54:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:45.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:45.028 00:13:45.028 --- 10.0.0.2 ping statistics --- 00:13:45.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.028 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:45.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:45.028 00:13:45.028 --- 10.0.0.3 ping statistics --- 00:13:45.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.028 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:45.028 00:13:45.028 --- 10.0.0.1 ping statistics --- 00:13:45.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.028 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=73974 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 73974 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 73974 ']' 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:45.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:45.028 08:54:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:45.028 [2024-05-15 08:54:01.171208] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:45.028 [2024-05-15 08:54:01.171311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.285 [2024-05-15 08:54:01.313398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.285 [2024-05-15 08:54:01.383455] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.285 [2024-05-15 08:54:01.383523] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
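With the namespace reachable (the three pings above all succeed), NVMF_APP is prefixed with the namespace command and the target is started with --wait-for-rpc, so it idles before framework initialization until the harness drives it over the RPC socket; the explicit framework_start_init a little further down is the other half of that handshake. A minimal sketch of the launch-and-wait step, with the polling loop as an illustrative stand-in for the harness's waitforlisten helper:

    # From the trace: nvmf_tgt inside the namespace, 4 cores (-m 0xF), all trace
    # groups enabled (-e 0xFFFF), paused until RPC-driven init (--wait-for-rpc).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Illustrative only: poll the UNIX-domain RPC socket until the app answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done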
00:13:45.285 [2024-05-15 08:54:01.383538] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.285 [2024-05-15 08:54:01.383548] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.285 [2024-05-15 08:54:01.383557] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.285 [2024-05-15 08:54:01.383693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.285 [2024-05-15 08:54:01.383812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.286 [2024-05-15 08:54:01.384413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.286 [2024-05-15 08:54:01.384432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 [2024-05-15 08:54:02.188831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 Malloc0 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 
08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.220 [2024-05-15 08:54:02.236711] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:46.220 [2024-05-15 08:54:02.236985] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74027 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74029 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.220 { 00:13:46.220 "params": { 00:13:46.220 "name": "Nvme$subsystem", 00:13:46.220 "trtype": "$TEST_TRANSPORT", 00:13:46.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.220 "adrfam": "ipv4", 00:13:46.220 "trsvcid": "$NVMF_PORT", 00:13:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.220 "hdgst": ${hdgst:-false}, 00:13:46.220 "ddgst": ${ddgst:-false} 00:13:46.220 }, 00:13:46.220 "method": "bdev_nvme_attach_controller" 00:13:46.220 } 00:13:46.220 EOF 00:13:46.220 )") 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74031 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.220 { 00:13:46.220 "params": { 00:13:46.220 "name": "Nvme$subsystem", 00:13:46.220 "trtype": "$TEST_TRANSPORT", 00:13:46.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.220 "adrfam": "ipv4", 00:13:46.220 "trsvcid": "$NVMF_PORT", 00:13:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.220 "hdgst": ${hdgst:-false}, 00:13:46.220 "ddgst": ${ddgst:-false} 00:13:46.220 }, 00:13:46.220 "method": "bdev_nvme_attach_controller" 00:13:46.220 } 00:13:46.220 EOF 00:13:46.220 )") 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.220 { 00:13:46.220 "params": { 00:13:46.220 "name": "Nvme$subsystem", 00:13:46.220 "trtype": "$TEST_TRANSPORT", 00:13:46.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.220 "adrfam": "ipv4", 00:13:46.220 "trsvcid": "$NVMF_PORT", 00:13:46.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.220 "hdgst": ${hdgst:-false}, 00:13:46.220 "ddgst": ${ddgst:-false} 00:13:46.220 }, 00:13:46.220 "method": "bdev_nvme_attach_controller" 00:13:46.220 } 00:13:46.220 EOF 00:13:46.220 )") 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74038 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
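Each bdevperf instance receives its target description through gen_nvmf_target_json, traced above: one heredoc fragment per subsystem is appended to the config array, the fragments are joined and normalized with jq, and the result is handed to bdevperf as --json /dev/fd/63 via process substitution. The fully resolved document for subsystem 1 is printed a few entries below; reproduced here for readability (copied from the trace, not new configuration):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }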
00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:46.220 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.221 { 00:13:46.221 "params": { 00:13:46.221 "name": "Nvme$subsystem", 00:13:46.221 "trtype": "$TEST_TRANSPORT", 00:13:46.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.221 "adrfam": "ipv4", 00:13:46.221 "trsvcid": "$NVMF_PORT", 00:13:46.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.221 "hdgst": ${hdgst:-false}, 00:13:46.221 "ddgst": ${ddgst:-false} 00:13:46.221 }, 00:13:46.221 "method": "bdev_nvme_attach_controller" 00:13:46.221 } 00:13:46.221 EOF 00:13:46.221 )") 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.221 "params": { 00:13:46.221 "name": "Nvme1", 00:13:46.221 "trtype": "tcp", 00:13:46.221 "traddr": "10.0.0.2", 00:13:46.221 "adrfam": "ipv4", 00:13:46.221 "trsvcid": "4420", 00:13:46.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.221 "hdgst": false, 00:13:46.221 "ddgst": false 00:13:46.221 }, 00:13:46.221 "method": "bdev_nvme_attach_controller" 00:13:46.221 }' 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.221 "params": { 00:13:46.221 "name": "Nvme1", 00:13:46.221 "trtype": "tcp", 00:13:46.221 "traddr": "10.0.0.2", 00:13:46.221 "adrfam": "ipv4", 00:13:46.221 "trsvcid": "4420", 00:13:46.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.221 "hdgst": false, 00:13:46.221 "ddgst": false 00:13:46.221 }, 00:13:46.221 "method": "bdev_nvme_attach_controller" 00:13:46.221 }' 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.221 "params": { 00:13:46.221 "name": "Nvme1", 00:13:46.221 "trtype": "tcp", 00:13:46.221 "traddr": "10.0.0.2", 00:13:46.221 "adrfam": "ipv4", 00:13:46.221 "trsvcid": "4420", 00:13:46.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.221 "hdgst": false, 00:13:46.221 "ddgst": false 00:13:46.221 }, 00:13:46.221 "method": "bdev_nvme_attach_controller" 00:13:46.221 }' 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.221 "params": { 00:13:46.221 "name": "Nvme1", 00:13:46.221 "trtype": "tcp", 00:13:46.221 "traddr": "10.0.0.2", 00:13:46.221 "adrfam": "ipv4", 00:13:46.221 "trsvcid": "4420", 00:13:46.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.221 "hdgst": false, 00:13:46.221 "ddgst": false 00:13:46.221 }, 00:13:46.221 "method": "bdev_nvme_attach_controller" 00:13:46.221 }' 00:13:46.221 08:54:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74027 00:13:46.221 [2024-05-15 08:54:02.293433] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:46.221 [2024-05-15 08:54:02.294076] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:46.221 [2024-05-15 08:54:02.323485] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:46.221 [2024-05-15 08:54:02.323580] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:46.221 [2024-05-15 08:54:02.331119] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:46.221 [2024-05-15 08:54:02.331214] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:46.221 [2024-05-15 08:54:02.358474] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:46.221 [2024-05-15 08:54:02.358603] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:46.479 [2024-05-15 08:54:02.471395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.479 [2024-05-15 08:54:02.509462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.479 [2024-05-15 08:54:02.518384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.479 [2024-05-15 08:54:02.564595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:46.479 [2024-05-15 08:54:02.574504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.479 [2024-05-15 08:54:02.618122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.479 Running I/O for 1 seconds... 00:13:46.479 [2024-05-15 08:54:02.642950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:46.479 [2024-05-15 08:54:02.689370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:46.479 Running I/O for 1 seconds... 00:13:46.737 Running I/O for 1 seconds... 00:13:46.737 Running I/O for 1 seconds... 
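At this point four bdevperf instances are running concurrently against the same namespace behind nqn.2016-06.io.spdk:cnode1, one per I/O type, each pinned to its own core and tracked by PID so the script can wait on them in turn (the wait calls follow). A condensed reconstruction of the fan-out, assuming the usual background-and-collect pattern; the flags are taken verbatim from the command lines above:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # -q 128: queue depth, -o 4096: I/O size in bytes, -t 1: seconds, -s 256: MB of memory
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

In this run the PIDs are 74027 (write), 74029 (read), 74031 (flush) and 74038 (unmap).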
00:13:47.670
00:13:47.670 Latency(us)
00:13:47.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:47.670 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:13:47.670 Nvme1n1 : 1.02 6439.76 25.16 0.00 0.00 19634.66 9532.51 41228.10
00:13:47.670 ===================================================================================================================
00:13:47.670 Total : 6439.76 25.16 0.00 0.00 19634.66 9532.51 41228.10
00:13:47.670
00:13:47.670 Latency(us)
00:13:47.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:47.670 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:13:47.670 Nvme1n1 : 1.01 7356.43 28.74 0.00 0.00 17297.65 10724.07 34555.35
00:13:47.670 ===================================================================================================================
00:13:47.670 Total : 7356.43 28.74 0.00 0.00 17297.65 10724.07 34555.35
00:13:47.670
00:13:47.670 Latency(us)
00:13:47.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:47.670 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:13:47.670 Nvme1n1 : 1.00 57625.94 225.10 0.00 0.00 2210.82 837.82 3157.64
00:13:47.670 ===================================================================================================================
00:13:47.670 Total : 57625.94 225.10 0.00 0.00 2210.82 837.82 3157.64
00:13:47.670 08:54:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74029
00:13:47.670
00:13:47.670 Latency(us)
00:13:47.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:47.670 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:13:47.670 Nvme1n1 : 1.01 7044.84 27.52 0.00 0.00 18105.02 6494.02 46947.61
00:13:47.670 ===================================================================================================================
00:13:47.670 Total : 7044.84 27.52 0.00 0.00 18105.02 6494.02 46947.61
00:13:47.928 08:54:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74031
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74038
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:47.928 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:48.186 rmmod nvme_tcp
00:13:48.186 rmmod nvme_fabrics
00:13:48.186 rmmod nvme_keyring
00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait --
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 73974 ']' 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 73974 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 73974 ']' 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 73974 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73974 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73974' 00:13:48.186 killing process with pid 73974 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 73974 00:13:48.186 [2024-05-15 08:54:04.246644] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:48.186 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 73974 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.444 00:13:48.444 real 0m3.813s 00:13:48.444 user 0m17.074s 00:13:48.444 sys 0m1.714s 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.444 08:54:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:48.444 ************************************ 00:13:48.444 END TEST nvmf_bdev_io_wait 00:13:48.444 ************************************ 00:13:48.444 08:54:04 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:48.444 08:54:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:48.444 08:54:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.444 08:54:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.444 
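The block above is the standard nvmftestfini teardown: unload the kernel initiator modules, kill the target by PID, and remove the namespace and leftover addresses. This is also why the next test's veth init starts from the same "Cannot find device" state. In outline, using the commands visible in the trace (remove_spdk_ns is the harness helper that deletes nvmf_tgt_ns_spdk, not a standalone tool):

    modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring in this run
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # pid 73974 here
    _remove_spdk_ns                # harness helper from nvmf/common.sh
    ip -4 addr flush nvmf_init_if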
************************************ 00:13:48.444 START TEST nvmf_queue_depth 00:13:48.444 ************************************ 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:48.444 * Looking for test storage... 00:13:48.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.444 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.445 Cannot find device "nvmf_tgt_br" 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.445 Cannot find device "nvmf_tgt_br2" 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.445 Cannot find device "nvmf_tgt_br" 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.445 Cannot find device "nvmf_tgt_br2" 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:13:48.445 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:48.703 08:54:04 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:13:48.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:48.703 00:13:48.703 --- 10.0.0.2 ping statistics --- 00:13:48.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.703 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:48.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:48.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:48.703 00:13:48.703 --- 10.0.0.3 ping statistics --- 00:13:48.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.703 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:48.703 00:13:48.703 --- 10.0.0.1 ping statistics --- 00:13:48.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.703 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.703 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.704 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.704 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.704 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.962 08:54:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74264 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74264 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 74264 ']' 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
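nvmf_queue_depth repeats the same bring-up but pins the target to a single core (-m 0x2) and starts it without --wait-for-rpc, so framework initialization runs on its own. The provisioning that follows is the usual rpc_cmd sequence; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent direct calls would look roughly like:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420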
00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.963 08:54:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:48.963 [2024-05-15 08:54:05.004665] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:48.963 [2024-05-15 08:54:05.004772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.963 [2024-05-15 08:54:05.138853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.221 [2024-05-15 08:54:05.198381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.221 [2024-05-15 08:54:05.198448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.221 [2024-05-15 08:54:05.198468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.221 [2024-05-15 08:54:05.198481] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.221 [2024-05-15 08:54:05.198491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.221 [2024-05-15 08:54:05.198527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 [2024-05-15 08:54:05.322460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 Malloc0 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.221 08:54:05 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 [2024-05-15 08:54:05.382759] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:49.221 [2024-05-15 08:54:05.383044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=74306 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 74306 /var/tmp/bdevperf.sock 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 74306 ']' 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.221 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 [2024-05-15 08:54:05.441930] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
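Unlike the previous test, the queue-depth run uses a single bdevperf started in RPC-server mode (-z) on its own socket, so the NVMe-oF controller is attached and the benchmark is kicked off over RPC instead of through a --json config. The control path, condensed from the commands traced here and just below (queue depth 1024, 4 KiB verify I/O for 10 seconds):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$BDEVPERF" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!          # 74306 in this run; the harness waits for the socket first

    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests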
00:13:49.221 [2024-05-15 08:54:05.442029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74306 ] 00:13:49.480 [2024-05-15 08:54:05.583941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.480 [2024-05-15 08:54:05.654543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:49.738 NVMe0n1 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.738 08:54:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:49.738 Running I/O for 10 seconds... 00:14:01.955 00:14:01.955 Latency(us) 00:14:01.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.955 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:01.955 Verification LBA range: start 0x0 length 0x4000 00:14:01.955 NVMe0n1 : 10.07 8251.68 32.23 0.00 0.00 123516.61 11915.64 116773.24 00:14:01.955 =================================================================================================================== 00:14:01.955 Total : 8251.68 32.23 0.00 0.00 123516.61 11915.64 116773.24 00:14:01.955 0 00:14:01.955 08:54:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 74306 00:14:01.955 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 74306 ']' 00:14:01.955 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 74306 00:14:01.955 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:01.955 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.955 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74306 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:01.956 killing process with pid 74306 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74306' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 74306 00:14:01.956 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.956 00:14:01.956 Latency(us) 00:14:01.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.956 =================================================================================================================== 00:14:01.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.956 08:54:16 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 74306 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.956 rmmod nvme_tcp 00:14:01.956 rmmod nvme_fabrics 00:14:01.956 rmmod nvme_keyring 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74264 ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74264 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 74264 ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 74264 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74264 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:01.956 killing process with pid 74264 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74264' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 74264 00:14:01.956 [2024-05-15 08:54:16.343035] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 74264 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.956 08:54:16 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:01.956 00:14:01.956 real 0m12.075s 00:14:01.956 user 0m20.889s 00:14:01.956 sys 0m1.915s 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:01.956 08:54:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.956 ************************************ 00:14:01.956 END TEST nvmf_queue_depth 00:14:01.956 ************************************ 00:14:01.956 08:54:16 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:01.956 08:54:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:01.956 08:54:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:01.956 08:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.956 ************************************ 00:14:01.956 START TEST nvmf_target_multipath 00:14:01.956 ************************************ 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:01.956 * Looking for test storage... 00:14:01.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.956 08:54:16 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.956 08:54:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
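For orientation, the nvmf_veth_init trace that follows wires up the two-path test network from the variables defined above. Condensed (and slightly reordered) from the traced commands below, the setup amounts to roughly this, a sketch rather than the exact command order:

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one for the initiator, two for the target listeners
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the host-side peers together and allow NVMe/TCP traffic
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

(the actual trace also brings each link up and pings 10.0.0.2, 10.0.0.3 and 10.0.0.1 to verify connectivity). The result is one initiator interface that can reach the same target namespace over two distinct IP addresses, which is what gives the multipath test its two TCP paths.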
00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:01.957 Cannot find device "nvmf_tgt_br" 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.957 Cannot find device "nvmf_tgt_br2" 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:01.957 Cannot find device "nvmf_tgt_br" 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:01.957 Cannot find device "nvmf_tgt_br2" 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:01.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:01.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:01.957 08:54:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:01.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:14:01.957 00:14:01.957 --- 10.0.0.2 ping statistics --- 00:14:01.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.957 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:01.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:14:01.957 00:14:01.957 --- 10.0.0.3 ping statistics --- 00:14:01.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.957 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:01.957 00:14:01.957 --- 10.0.0.1 ping statistics --- 00:14:01.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.957 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=74622 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 74622 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 74622 ']' 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:01.957 [2024-05-15 08:54:17.175027] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
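The target was just launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF). Once it is listening on /var/tmp/spdk.sock, the multipath test drives it over JSON-RPC; stripped of the xtrace noise, the configuration traced further below boils down to the following (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and NVME_HOSTNQN/NVME_HOSTID are the values sourced from nvmf/common.sh earlier in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

After the two nvme connect calls the initiator sees a single subsystem with two controllers, i.e. paths nvme0c0n1 and nvme0c1n1 to the same namespace, whose ANA states the remainder of the test flips between optimized, non_optimized and inaccessible (via nvmf_subsystem_listener_set_ana_state) while fio keeps I/O running.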
00:14:01.957 [2024-05-15 08:54:17.175121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.957 [2024-05-15 08:54:17.323786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.957 [2024-05-15 08:54:17.405261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.957 [2024-05-15 08:54:17.405340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.957 [2024-05-15 08:54:17.405355] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.957 [2024-05-15 08:54:17.405365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.957 [2024-05-15 08:54:17.405374] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.957 [2024-05-15 08:54:17.405474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.957 [2024-05-15 08:54:17.405622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.957 [2024-05-15 08:54:17.406227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.957 [2024-05-15 08:54:17.406235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.957 08:54:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:01.958 08:54:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.958 08:54:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.958 [2024-05-15 08:54:17.812670] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.958 08:54:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:01.958 Malloc0 00:14:01.958 08:54:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:02.524 08:54:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.782 08:54:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.040 [2024-05-15 08:54:19.032243] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:03.040 [2024-05-15 08:54:19.032506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:14:03.040 08:54:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:03.299 [2024-05-15 08:54:19.304758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:03.299 08:54:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:03.299 08:54:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:03.559 08:54:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:03.559 08:54:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:14:03.559 08:54:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.559 08:54:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:03.559 08:54:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:14:06.092 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=74755 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:06.093 08:54:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:14:06.093 [global] 00:14:06.093 thread=1 00:14:06.093 invalidate=1 00:14:06.093 rw=randrw 00:14:06.093 time_based=1 00:14:06.093 runtime=6 00:14:06.093 ioengine=libaio 00:14:06.093 direct=1 00:14:06.093 bs=4096 00:14:06.093 iodepth=128 00:14:06.093 norandommap=0 00:14:06.093 numjobs=1 00:14:06.093 00:14:06.093 verify_dump=1 00:14:06.093 verify_backlog=512 00:14:06.093 verify_state_save=0 00:14:06.093 do_verify=1 00:14:06.093 verify=crc32c-intel 00:14:06.093 [job0] 00:14:06.093 filename=/dev/nvme0n1 00:14:06.093 Could not set queue depth (nvme0n1) 00:14:06.093 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.093 fio-3.35 00:14:06.093 Starting 1 thread 00:14:06.663 08:54:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:06.921 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:07.179 08:54:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:08.111 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:08.111 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:08.111 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:08.111 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:08.369 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:08.936 08:54:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:09.870 08:54:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:09.870 08:54:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:09.870 08:54:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:09.870 08:54:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 74755 00:14:12.402 00:14:12.402 job0: (groupid=0, jobs=1): err= 0: pid=74777: Wed May 15 08:54:28 2024 00:14:12.402 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(255MiB/6007msec) 00:14:12.402 slat (usec): min=4, max=4709, avg=52.27, stdev=232.17 00:14:12.402 clat (usec): min=720, max=14499, avg=8021.17, stdev=1212.84 00:14:12.402 lat (usec): min=764, max=14509, avg=8073.44, stdev=1222.13 00:14:12.402 clat percentiles (usec): 00:14:12.402 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7242], 00:14:12.402 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8225], 00:14:12.402 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:14:12.402 | 99.00th=[11731], 99.50th=[12125], 99.90th=[12780], 99.95th=[13042], 00:14:12.402 | 99.99th=[13829] 00:14:12.402 bw ( KiB/s): min=10416, max=27912, per=52.78%, avg=22919.27, stdev=5896.91, samples=11 00:14:12.402 iops : min= 2604, max= 6976, avg=5729.82, stdev=1474.14, samples=11 00:14:12.402 write: IOPS=6344, BW=24.8MiB/s (26.0MB/s)(135MiB/5463msec); 0 zone resets 00:14:12.402 slat (usec): min=6, max=3303, avg=64.18, stdev=156.58 00:14:12.402 clat (usec): min=736, max=14225, avg=6880.63, stdev=1025.93 00:14:12.402 lat (usec): min=782, max=14249, avg=6944.81, stdev=1028.91 00:14:12.402 clat percentiles (usec): 00:14:12.402 | 1.00th=[ 3720], 5.00th=[ 4948], 10.00th=[ 5800], 20.00th=[ 6325], 00:14:12.402 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:14:12.402 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8094], 00:14:12.402 | 99.00th=[ 9896], 99.50th=[10683], 99.90th=[12125], 99.95th=[12518], 00:14:12.402 | 99.99th=[12780] 00:14:12.402 bw ( KiB/s): min=10600, max=27904, per=90.30%, avg=22914.91, stdev=5663.11, samples=11 00:14:12.402 iops : min= 2650, max= 6976, avg=5728.73, stdev=1415.78, samples=11 00:14:12.402 lat (usec) : 750=0.01%, 1000=0.01% 00:14:12.402 lat (msec) : 2=0.06%, 4=0.71%, 10=95.63%, 20=3.59% 00:14:12.402 cpu : usr=5.76%, sys=24.38%, ctx=6430, majf=0, minf=84 00:14:12.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:12.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:12.402 issued rwts: total=65210,34658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:12.402 00:14:12.402 Run status group 0 (all jobs): 00:14:12.402 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=255MiB (267MB), run=6007-6007msec 00:14:12.402 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=135MiB (142MB), run=5463-5463msec 00:14:12.402 00:14:12.402 Disk stats (read/write): 00:14:12.402 nvme0n1: ios=64290/34022, merge=0/0, 
ticks=481246/218178, in_queue=699424, util=98.65% 00:14:12.402 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:12.402 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:14:12.660 08:54:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=74910 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:13.594 08:54:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:13.594 [global] 00:14:13.594 thread=1 00:14:13.594 invalidate=1 00:14:13.594 rw=randrw 00:14:13.594 time_based=1 00:14:13.594 runtime=6 00:14:13.594 ioengine=libaio 00:14:13.594 direct=1 00:14:13.594 bs=4096 00:14:13.594 iodepth=128 00:14:13.594 norandommap=0 00:14:13.594 numjobs=1 00:14:13.595 00:14:13.595 verify_dump=1 00:14:13.595 verify_backlog=512 00:14:13.595 verify_state_save=0 00:14:13.595 do_verify=1 00:14:13.595 verify=crc32c-intel 00:14:13.595 [job0] 00:14:13.595 filename=/dev/nvme0n1 00:14:13.595 Could not set queue depth (nvme0n1) 00:14:13.595 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.595 fio-3.35 00:14:13.595 Starting 1 thread 00:14:14.541 08:54:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:14.799 08:54:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:15.056 08:54:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:16.454 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:16.454 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:16.454 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:16.454 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:16.454 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:16.785 08:54:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:14:17.773 08:54:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:14:17.773 08:54:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:17.773 08:54:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:17.773 08:54:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 74910 00:14:20.309 00:14:20.309 job0: (groupid=0, jobs=1): err= 0: pid=74931: Wed May 15 08:54:35 2024 00:14:20.309 read: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(264MiB/6006msec) 00:14:20.309 slat (usec): min=4, max=5499, avg=44.62, stdev=212.66 00:14:20.309 clat (usec): min=275, max=22454, avg=7767.31, stdev=2265.52 00:14:20.309 lat (usec): min=299, max=22471, avg=7811.92, stdev=2280.11 00:14:20.309 clat percentiles (usec): 00:14:20.309 | 1.00th=[ 1467], 5.00th=[ 3654], 10.00th=[ 4883], 20.00th=[ 6325], 00:14:20.309 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8160], 00:14:20.309 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[11338], 00:14:20.309 | 99.00th=[13698], 99.50th=[15139], 99.90th=[18220], 99.95th=[19530], 00:14:20.309 | 99.99th=[21890] 00:14:20.309 bw ( KiB/s): min= 9472, max=36064, per=53.37%, avg=23976.73, stdev=8625.36, samples=11 00:14:20.309 iops : min= 2368, max= 9016, avg=5994.18, stdev=2156.34, samples=11 00:14:20.309 write: IOPS=6781, BW=26.5MiB/s (27.8MB/s)(142MiB/5369msec); 0 zone resets 00:14:20.309 slat (usec): min=12, max=2080, avg=58.01, stdev=139.53 00:14:20.309 clat (usec): min=241, max=20199, avg=6529.84, stdev=2145.31 00:14:20.309 lat (usec): min=271, max=20226, avg=6587.85, stdev=2157.28 00:14:20.309 clat percentiles (usec): 00:14:20.309 | 1.00th=[ 1106], 5.00th=[ 2868], 10.00th=[ 3621], 20.00th=[ 4621], 00:14:20.309 | 30.00th=[ 5669], 40.00th=[ 6456], 50.00th=[ 6849], 60.00th=[ 7177], 00:14:20.309 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8848], 95.00th=[ 9634], 00:14:20.309 | 99.00th=[12125], 99.50th=[13304], 99.90th=[15795], 99.95th=[17171], 00:14:20.309 | 99.99th=[20055] 00:14:20.309 bw ( KiB/s): min= 9784, max=36864, per=88.49%, avg=24003.64, stdev=8455.37, samples=11 00:14:20.309 iops : min= 2446, max= 9216, avg=6000.91, stdev=2113.84, samples=11 00:14:20.309 lat (usec) : 250=0.01%, 500=0.05%, 750=0.15%, 1000=0.32% 00:14:20.309 lat (msec) : 2=1.83%, 4=6.19%, 10=82.40%, 20=9.03%, 50=0.03% 00:14:20.309 cpu : usr=6.18%, sys=26.33%, ctx=7875, majf=0, minf=121 00:14:20.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:20.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:20.309 issued rwts: total=67457,36409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:20.309 00:14:20.309 Run status group 0 (all jobs): 00:14:20.309 READ: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=264MiB (276MB), run=6006-6006msec 00:14:20.309 WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=142MiB (149MB), run=5369-5369msec 00:14:20.309 00:14:20.309 Disk stats (read/write): 00:14:20.309 nvme0n1: ios=66739/35524, merge=0/0, ticks=480293/210342, in_queue=690635, util=98.62% 00:14:20.309 08:54:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.310 08:54:36 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.310 rmmod nvme_tcp 00:14:20.310 rmmod nvme_fabrics 00:14:20.310 rmmod nvme_keyring 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 74622 ']' 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 74622 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 74622 ']' 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 74622 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74622 00:14:20.310 killing process with pid 74622 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74622' 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 74622 00:14:20.310 [2024-05-15 08:54:36.430697] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:20.310 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 74622 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:20.568 00:14:20.568 real 0m20.056s 00:14:20.568 user 1m18.729s 00:14:20.568 sys 0m6.645s 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.568 08:54:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:20.568 ************************************ 00:14:20.568 END TEST nvmf_target_multipath 00:14:20.568 ************************************ 00:14:20.568 08:54:36 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:20.568 08:54:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:20.568 08:54:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:20.568 08:54:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.568 ************************************ 00:14:20.568 START TEST nvmf_zcopy 00:14:20.568 ************************************ 00:14:20.568 08:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:20.827 * Looking for test storage... 
00:14:20.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:20.827 Cannot find device "nvmf_tgt_br" 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.827 Cannot find device "nvmf_tgt_br2" 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:20.827 Cannot find device "nvmf_tgt_br" 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:20.827 Cannot find device "nvmf_tgt_br2" 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.827 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:20.828 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:20.828 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:20.828 08:54:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:20.828 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:20.828 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.086 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:21.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:21.087 00:14:21.087 --- 10.0.0.2 ping statistics --- 00:14:21.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.087 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:21.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:14:21.087 00:14:21.087 --- 10.0.0.3 ping statistics --- 00:14:21.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.087 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:21.087 00:14:21.087 --- 10.0.0.1 ping statistics --- 00:14:21.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.087 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75208 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75208 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 75208 ']' 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.087 08:54:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.087 [2024-05-15 08:54:37.306100] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:21.087 [2024-05-15 08:54:37.306193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.345 [2024-05-15 08:54:37.444187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.345 [2024-05-15 08:54:37.505178] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.345 [2024-05-15 08:54:37.505237] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
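The nvmf_veth_init sequence traced above is easier to follow laid out as a standalone script. The sketch below is reconstructed from the ip/iptables commands visible in this log, not copied from nvmf/common.sh, and it omits the function's teardown of any previous run and its error handling; the namespace, interface, and bridge names are the ones the trace uses.

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge ties the host-side veth ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open NVMe/TCP port 4420 on the initiator interface, allow bridge-local forwarding,
# then verify both directions with the pings shown above.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1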
00:14:21.345 [2024-05-15 08:54:37.505249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.345 [2024-05-15 08:54:37.505258] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.345 [2024-05-15 08:54:37.505265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.345 [2024-05-15 08:54:37.505297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.280 [2024-05-15 08:54:38.370123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.280 [2024-05-15 08:54:38.386027] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:22.280 [2024-05-15 08:54:38.386410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.280 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.281 malloc0 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:22.281 { 00:14:22.281 "params": { 00:14:22.281 "name": "Nvme$subsystem", 00:14:22.281 "trtype": "$TEST_TRANSPORT", 00:14:22.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:22.281 "adrfam": "ipv4", 00:14:22.281 "trsvcid": "$NVMF_PORT", 00:14:22.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:22.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:22.281 "hdgst": ${hdgst:-false}, 00:14:22.281 "ddgst": ${ddgst:-false} 00:14:22.281 }, 00:14:22.281 "method": "bdev_nvme_attach_controller" 00:14:22.281 } 00:14:22.281 EOF 00:14:22.281 )") 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:22.281 08:54:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:22.281 "params": { 00:14:22.281 "name": "Nvme1", 00:14:22.281 "trtype": "tcp", 00:14:22.281 "traddr": "10.0.0.2", 00:14:22.281 "adrfam": "ipv4", 00:14:22.281 "trsvcid": "4420", 00:14:22.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.281 "hdgst": false, 00:14:22.281 "ddgst": false 00:14:22.281 }, 00:14:22.281 "method": "bdev_nvme_attach_controller" 00:14:22.281 }' 00:14:22.281 [2024-05-15 08:54:38.475400] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:22.281 [2024-05-15 08:54:38.475499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75264 ] 00:14:22.539 [2024-05-15 08:54:38.615286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.539 [2024-05-15 08:54:38.685529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.798 Running I/O for 10 seconds... 
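In the trace above, rpc_cmd is the autotest helper that effectively forwards its arguments to scripts/rpc.py against the /var/tmp/spdk.sock socket the target listens on. Condensed into plain commands, the zcopy target bring-up and the first workload look roughly like the sketch below; the arguments are the ones traced, while the backgrounding and readiness wait are simplified stand-ins for nvmfappstart/waitforlisten.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# nvmf_tgt runs inside the target namespace on core 1 (-m 0x2) with all trace groups enabled.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2   # the real script polls the RPC socket (waitforlisten) instead of sleeping

# TCP transport with zero-copy enabled; -o and -c 0 are passed through exactly as traced.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1 (up to 10 namespaces), plus data and discovery listeners on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# First workload: 10 s verify run, queue depth 128, 8 KiB I/O.
# (gen_nvmf_target_json is the nvmf/common.sh helper whose output is shown in the trace.)
"$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192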
00:14:32.794 00:14:32.794 Latency(us) 00:14:32.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.794 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:32.794 Verification LBA range: start 0x0 length 0x1000 00:14:32.794 Nvme1n1 : 10.02 5732.40 44.78 0.00 0.00 22255.75 3321.48 35031.97 00:14:32.794 =================================================================================================================== 00:14:32.794 Total : 5732.40 44.78 0.00 0.00 22255.75 3321.48 35031.97 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=75382 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:33.052 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:33.052 { 00:14:33.052 "params": { 00:14:33.052 "name": "Nvme$subsystem", 00:14:33.052 "trtype": "$TEST_TRANSPORT", 00:14:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.052 "adrfam": "ipv4", 00:14:33.052 "trsvcid": "$NVMF_PORT", 00:14:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.053 "hdgst": ${hdgst:-false}, 00:14:33.053 "ddgst": ${ddgst:-false} 00:14:33.053 }, 00:14:33.053 "method": "bdev_nvme_attach_controller" 00:14:33.053 } 00:14:33.053 EOF 00:14:33.053 )") 00:14:33.053 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:33.053 [2024-05-15 08:54:49.034432] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.034481] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
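As a quick consistency check on the verify-run summary above, 5732.40 IOPS at the 8192-byte I/O size works out to the 44.78 MiB/s the table reports:

# 5732.40 IOPS x 8192 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes).
echo 'scale=2; 5732.40 * 8192 / 1048576' | bc   # -> 44.78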
00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:33.053 08:54:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:33.053 "params": { 00:14:33.053 "name": "Nvme1", 00:14:33.053 "trtype": "tcp", 00:14:33.053 "traddr": "10.0.0.2", 00:14:33.053 "adrfam": "ipv4", 00:14:33.053 "trsvcid": "4420", 00:14:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.053 "hdgst": false, 00:14:33.053 "ddgst": false 00:14:33.053 }, 00:14:33.053 "method": "bdev_nvme_attach_controller" 00:14:33.053 }' 00:14:33.053 [2024-05-15 08:54:49.046396] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.046428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.058396] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.058428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.068294] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
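gen_nvmf_target_json renders the fragment printed above into the configuration bdevperf reads from /dev/fd/63. In the sketch below, the inner bdev_nvme_attach_controller entry is taken from that printf output; the surrounding "subsystems"/"bdev" wrapper and the temporary file name are assumptions about the final shape, based on how bdevperf --json configurations are usually structured rather than on this log.

# Hypothetical file; the test streams the config via process substitution instead.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Second workload from the trace: 5 s of 50/50 random read/write at queue depth 128, 8 KiB I/O.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192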
00:14:33.053 [2024-05-15 08:54:49.068367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75382 ] 00:14:33.053 [2024-05-15 08:54:49.070404] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.070430] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.082418] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.082613] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.094417] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.094579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.106419] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.106580] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.118419] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.118574] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.130421] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.130574] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 
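The run of identical failures that follows comes from the test itself: while the 5 s randrw bdevperf job is brought up and runs its I/O, nvmf_subsystem_add_ns is re-issued repeatedly against the live subsystem. Every call is rejected with JSON-RPC error -32602 because NSID 1 is already in use, but each attempt still pauses and resumes the subsystem underneath the active zero-copy I/O (the error is logged from nvmf_rpc_ns_paused, i.e. after the pause has taken effect). A minimal loop of that shape is sketched below; it is a hypothetical reconstruction for illustration, not the verbatim target/zcopy.sh code, and the iteration count is arbitrary.

SPDK=/home/vagrant/spdk_repo/spdk   # repo path as used throughout this log

# Re-issue the namespace add while bdevperf I/O is running; each call fails with
# -32602 (NSID 1 already in use) but still drives a subsystem pause/resume cycle.
for _ in $(seq 1 50); do
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done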
[2024-05-15 08:54:49.142422] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.142574] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.154428] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.154579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.166430] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.166581] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.178435] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.178467] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.190453] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.190487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.202441] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.202472] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 [2024-05-15 08:54:49.204426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.214476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.214517] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.226455] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.226490] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.238496] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.238541] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.250464] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.250498] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.262462] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.262493] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 [2024-05-15 08:54:49.262913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.053 [2024-05-15 08:54:49.274475] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.053 [2024-05-15 08:54:49.274509] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.053 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.286491] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.286533] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) 
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.298491] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.298531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.310508] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.310553] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.322472] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.322501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.334512] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.334549] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.346502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.346536] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.358504] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.358538] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.370508] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 
08:54:49.370545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.382502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.382533] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.394518] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.394554] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 Running I/O for 5 seconds... 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.411123] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.411162] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.420310] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.420358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.437037] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.437075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.454056] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.312 [2024-05-15 08:54:49.454094] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.312 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.312 [2024-05-15 08:54:49.471300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.313 [2024-05-15 08:54:49.471339] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.313 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.313 [2024-05-15 08:54:49.487307] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.313 [2024-05-15 08:54:49.487347] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.313 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.313 [2024-05-15 08:54:49.503703] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.313 [2024-05-15 08:54:49.503748] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.313 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.313 [2024-05-15 08:54:49.514505] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.313 [2024-05-15 08:54:49.514543] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.313 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.313 [2024-05-15 08:54:49.529374] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.313 [2024-05-15 08:54:49.529414] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.313 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.545333] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.545373] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.561472] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.561513] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.577190] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.577229] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.593470] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.593509] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.609098] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.609140] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.627823] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.627861] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.642875] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.642913] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.653179] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.653217] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.667785] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.667823] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.679244] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.679304] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.693108] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.693145] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.709167] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.709204] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.728469] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.728529] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.743106] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.743157] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.571 [2024-05-15 08:54:49.753530] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.571 [2024-05-15 08:54:49.753580] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:33.571 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.572 [2024-05-15 08:54:49.768849] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.572 [2024-05-15 08:54:49.768887] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.572 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.572 [2024-05-15 08:54:49.784894] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.572 [2024-05-15 08:54:49.784930] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.572 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.572 [2024-05-15 08:54:49.795486] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.572 [2024-05-15 08:54:49.795524] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.572 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.811330] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.811370] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.827408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.827447] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.844291] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.844357] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:33.831 [2024-05-15 08:54:49.860067] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.860111] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.870864] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.870902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.886096] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.886133] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.901720] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.901757] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.916271] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.916310] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.931984] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.932023] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.941821] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.941862] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.957995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.958034] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.973516] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.973554] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.984422] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.984461] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:49.999334] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:49.999372] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:50.009903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:50.009940] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:50.025454] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.831 [2024-05-15 08:54:50.025500] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.831 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.831 [2024-05-15 08:54:50.041823] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.832 [2024-05-15 08:54:50.041864] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.832 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:33.832 [2024-05-15 08:54:50.058616] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.832 [2024-05-15 08:54:50.058671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.832 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.070003] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.070044] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.081841] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.081904] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.098003] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.098051] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.114397] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.114452] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.131095] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.131136] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.147087] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.147129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.157455] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.157497] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.170003] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.170043] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.184702] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.184758] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.195534] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.195606] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.208056] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.208112] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.223787] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.223845] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.239768] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.239811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.256584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.256657] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.273284] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.273350] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.289325] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.289365] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.300415] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.300457] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.090 [2024-05-15 08:54:50.314589] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.090 [2024-05-15 08:54:50.314629] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.090 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.331278] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.331320] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.347139] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.347190] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.362753] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.362792] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.372862] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.372900] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.388510] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.388605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.406163] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.406202] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.420445] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
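Every failure in this stretch of the log is the same request being retried against a subsystem that already exposes NSID 1: method nvmf_subsystem_add_ns with params nqn=nqn.2016-06.io.spdk:cnode1, bdev_name=malloc0, nsid=1, rejected with Code=-32602 Msg=Invalid parameters. A minimal sketch of the call that produces this pattern is shown below; the scripts/rpc.py invocation, the -n flag for the NSID, and the default RPC socket are assumptions based on the SPDK RPC client, not something printed in this log.

    # Sketch only, assuming an SPDK target on the default RPC socket that serves
    # nqn.2016-06.io.spdk:cnode1 and has a malloc bdev named malloc0.
    # The first call attaches malloc0 as NSID 1; repeating it reproduces the
    # errors logged here:
    #   "Requested NSID 1 already in use" -> Code=-32602 Msg=Invalid parameters
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

The test driving this log issues the duplicate request in a loop, which is why the same triple of messages (subsystem.c "already in use", nvmf_rpc.c "Unable to add namespace", and the Go client's JSON-RPC error line) repeats with only the timestamps changing.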
00:14:34.348 [2024-05-15 08:54:50.420487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.438390] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.438431] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.348 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.348 [2024-05-15 08:54:50.453651] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.348 [2024-05-15 08:54:50.453693] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.470899] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.470939] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.486935] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.486975] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.503746] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.503785] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.514510] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.514548] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.529761] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.529801] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.545300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.545341] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.562038] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.562081] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.349 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.349 [2024-05-15 08:54:50.578974] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.349 [2024-05-15 08:54:50.579016] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.595371] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.595412] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.611109] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.611149] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.626942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.626989] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.637882] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.637926] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.653544] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.653596] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.669538] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.669590] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.685419] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.685474] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.701081] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.701134] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.710758] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.710795] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.725619] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.725654] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.740859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.740896] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.758899] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.758943] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.775208] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.775279] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.790732] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.790770] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.807162] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.807200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.607 [2024-05-15 08:54:50.823914] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.607 [2024-05-15 08:54:50.823985] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:34.607 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.866 [2024-05-15 08:54:50.840554] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.866 [2024-05-15 08:54:50.840604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.866 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.866 [2024-05-15 08:54:50.856997] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.866 [2024-05-15 08:54:50.857037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.866 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.866 [2024-05-15 08:54:50.873124] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.866 [2024-05-15 08:54:50.873166] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.889549] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.889627] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.906714] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.906751] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.921817] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.921853] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:34.867 [2024-05-15 08:54:50.937694] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.937731] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.955935] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.955991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.971251] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.971305] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.981567] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.981616] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:50.995957] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:50.996012] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:51.013807] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:51.013851] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:51.029544] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:51.029614] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:51 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:51.045747] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:51.045798] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:51.062261] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:51.062318] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:51.078319] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:51.078377] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:34.867 [2024-05-15 08:54:51.089213] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.867 [2024-05-15 08:54:51.089251] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.867 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.104053] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.104091] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.121242] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.121296] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.137618] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.137671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.154593] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.154648] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.171552] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.171622] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.187165] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.187218] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.203089] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.126 [2024-05-15 08:54:51.203129] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.126 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.126 [2024-05-15 08:54:51.213689] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.213725] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.228476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.228524] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.247437] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.247491] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.263103] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.263142] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.278166] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.278220] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.296670] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.296724] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.312515] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.312553] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.328469] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.328507] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.338444] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.338481] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.127 [2024-05-15 08:54:51.353207] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.127 [2024-05-15 08:54:51.353259] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.127 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.364068] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.364104] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.379183] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.379222] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.396733] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.396775] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.412317] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.412356] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.429292] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.429332] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.445692] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.445732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.462246] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.462294] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.478390] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.478432] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.488459] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.488497] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.503386] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.503426] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.519088] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.386 [2024-05-15 08:54:51.519126] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:35.386 [2024-05-15 08:54:51.535720] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:35.386 [2024-05-15 08:54:51.535759] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.386 2024/05/15 08:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.473 [2024-05-15 08:54:53.515647] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.473 [2024-05-15 08:54:53.515687] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.473 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.473 [2024-05-15 08:54:53.531738] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.473 [2024-05-15 08:54:53.531779] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.550105] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.550177] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.566337] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.566384] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.582686] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.582727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.598207] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.598247] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.616474] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.616536] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.631204] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.631257] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.647594] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.647638] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.664735] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.664787] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.680629] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.680678] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.474 [2024-05-15 08:54:53.691235] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.474 [2024-05-15 08:54:53.691277] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.474 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.706252] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.706317] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.722265] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.722329] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.733100] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.733142] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.744271] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.744309] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.759674] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.759717] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.770490] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.770531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.785549] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.785600] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.802097] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
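Every failure in this stretch of the log is the same JSON-RPC request, nvmf_subsystem_add_ns, re-issued against nqn.2016-06.io.spdk:cnode1 while namespace 1 is already attached, so the target keeps rejecting it with Code=-32602 (Invalid parameters). A minimal sketch of the rejected call, assuming the target from this run is still listening on the default /var/tmp/spdk.sock and that malloc0 is already exposed as NSID 1, would be:

    # Hypothetical reproduction; the nqn, bdev name and nsid are taken from the
    # params map logged above. Because NSID 1 is already in use, the call is
    # expected to fail with Code=-32602 Msg=Invalid parameters.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The Go-style map[...] and %!s(bool=false) rendering in the error lines suggests the request went through a Go JSON-RPC client wrapper rather than rpc.py directly; the parameters are the same either way.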
00:14:37.733 [2024-05-15 08:54:53.802149] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.818082] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.818135] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.834146] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.834184] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.845014] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.845055] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.860236] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.860275] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.876391] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.876438] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.893030] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.733 [2024-05-15 08:54:53.893072] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.733 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.733 [2024-05-15 08:54:53.909480] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.734 [2024-05-15 08:54:53.909531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.734 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.734 [2024-05-15 08:54:53.927155] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.734 [2024-05-15 08:54:53.927222] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.734 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.734 [2024-05-15 08:54:53.942181] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.734 [2024-05-15 08:54:53.942226] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.734 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.734 [2024-05-15 08:54:53.958222] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.734 [2024-05-15 08:54:53.958262] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.734 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:53.976255] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:53.976298] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:53.991995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:53.992040] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.009055] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.009109] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.025823] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.025867] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.041680] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.041735] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.058800] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.058847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.074852] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.074895] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.085107] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.085163] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.096890] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.096930] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.112670] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.112716] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.129047] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.129101] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.146821] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.146859] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.993 [2024-05-15 08:54:54.162861] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.993 [2024-05-15 08:54:54.162918] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.993 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.994 [2024-05-15 08:54:54.180146] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.994 [2024-05-15 08:54:54.180220] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.994 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.994 [2024-05-15 08:54:54.196201] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.994 [2024-05-15 08:54:54.196240] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.994 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:37.994 [2024-05-15 08:54:54.212540] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.994 [2024-05-15 08:54:54.212590] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:37.994 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.230386] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.230439] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.246890] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.246951] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.263686] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.263732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.280709] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.280768] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.291618] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.291661] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.307000] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.307062] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:14:38.253 [2024-05-15 08:54:54.322824] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.322880] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.253 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.253 [2024-05-15 08:54:54.339982] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.253 [2024-05-15 08:54:54.340037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.355975] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.356015] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.372364] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.372409] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.388855] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.388898] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.403844] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.403884] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 00:14:38.254 Latency(us) 00:14:38.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.254 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:38.254 Nvme1n1 : 5.01 11299.85 88.28 0.00 0.00 11312.55 
4885.41 21328.99 00:14:38.254 =================================================================================================================== 00:14:38.254 Total : 11299.85 88.28 0.00 0.00 11312.55 4885.41 21328.99 00:14:38.254 [2024-05-15 08:54:54.413352] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.413402] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.425357] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.425406] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.437369] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.437419] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.449371] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.449420] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.461385] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.461437] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.473375] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.473420] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.254 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.254 [2024-05-15 08:54:54.485366] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.254 [2024-05-15 08:54:54.485407] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.497353] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.497389] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.509352] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.509385] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.521374] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.521418] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.533377] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.533415] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.545362] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.545395] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.557366] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.557399] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.569396] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.569437] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.581367] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.581397] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 [2024-05-15 08:54:54.589359] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.513 [2024-05-15 08:54:54.589387] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.513 2024/05/15 08:54:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:38.513 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75382) - No such process 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 75382 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.513 delay0 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.513 08:54:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:38.771 [2024-05-15 
08:54:54.777896] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:45.334 Initializing NVMe Controllers 00:14:45.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:45.334 Initialization complete. Launching workers. 00:14:45.334 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:14:45.334 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:14:45.334 success 196, unsuccess 184, failed 0 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.334 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.334 rmmod nvme_tcp 00:14:45.335 rmmod nvme_fabrics 00:14:45.335 rmmod nvme_keyring 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75208 ']' 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75208 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 75208 ']' 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 75208 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75208 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:45.335 killing process with pid 75208 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75208' 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 75208 00:14:45.335 [2024-05-15 08:55:00.947201] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:45.335 08:55:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 75208 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:45.335 00:14:45.335 real 0m24.437s 00:14:45.335 user 0m39.763s 00:14:45.335 sys 0m6.380s 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.335 08:55:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.335 ************************************ 00:14:45.335 END TEST nvmf_zcopy 00:14:45.335 ************************************ 00:14:45.335 08:55:01 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:45.335 08:55:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:45.335 08:55:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.335 08:55:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.335 ************************************ 00:14:45.335 START TEST nvmf_nmic 00:14:45.335 ************************************ 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:45.335 * Looking for test storage... 00:14:45.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:45.335 Cannot find device "nvmf_tgt_br" 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.335 Cannot find device "nvmf_tgt_br2" 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:45.335 Cannot find device "nvmf_tgt_br" 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:14:45.335 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:45.336 Cannot find device "nvmf_tgt_br2" 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.336 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.595 08:55:01 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:45.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:45.595 00:14:45.595 --- 10.0.0.2 ping statistics --- 00:14:45.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.595 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:45.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:45.595 00:14:45.595 --- 10.0.0.3 ping statistics --- 00:14:45.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.595 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:45.595 00:14:45.595 --- 10.0.0.1 ping statistics --- 00:14:45.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.595 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=75702 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 75702 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 75702 ']' 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:45.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
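For readers skimming the trace above: the nvmf_veth_init helper it shows can be reproduced by hand with roughly the commands below, lifted directly from the trace. This is a condensed sketch only — the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted.
#!/usr/bin/env bash
# Minimal sketch of the topology nvmf_veth_init builds: the initiator stays in
# the default netns (10.0.0.1), the SPDK target runs inside nvmf_tgt_ns_spdk
# (10.0.0.2), and the host-side veth legs are joined by a bridge.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one leg for the initiator, one for the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addresses used by the test (initiator 10.0.0.1, first target IP 10.0.0.2)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the two host-side legs together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check, as in the trace: the target IP must answer from the default ns
ping -c 1 10.0.0.2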
00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:45.595 08:55:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:45.595 [2024-05-15 08:55:01.769123] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:45.595 [2024-05-15 08:55:01.769242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.854 [2024-05-15 08:55:01.908855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.854 [2024-05-15 08:55:01.970374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.854 [2024-05-15 08:55:01.970447] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.854 [2024-05-15 08:55:01.970467] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.854 [2024-05-15 08:55:01.970480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.854 [2024-05-15 08:55:01.970491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.854 [2024-05-15 08:55:01.970665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.854 [2024-05-15 08:55:01.970959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.854 [2024-05-15 08:55:01.971196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.854 [2024-05-15 08:55:01.971211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.854 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:45.854 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:14:45.854 08:55:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.854 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.854 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 [2024-05-15 08:55:02.118033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 Malloc0 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 [2024-05-15 08:55:02.184380] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:46.114 [2024-05-15 08:55:02.184692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.114 test case1: single bdev can't be used in multiple subsystems 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.114 [2024-05-15 08:55:02.208485] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:46.114 [2024-05-15 08:55:02.208534] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:46.114 [2024-05-15 08:55:02.208549] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.114 2024/05/15 08:55:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:14:46.114 request: 00:14:46.114 { 00:14:46.114 "method": "nvmf_subsystem_add_ns", 00:14:46.114 "params": { 00:14:46.114 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:46.114 "namespace": { 00:14:46.114 "bdev_name": "Malloc0", 00:14:46.114 "no_auto_visible": false 00:14:46.114 } 00:14:46.114 } 00:14:46.114 } 00:14:46.114 Got JSON-RPC error response 00:14:46.114 GoRPCClient: error on JSON-RPC call 00:14:46.114 Adding namespace failed - expected result. 00:14:46.114 test case2: host connect to nvmf target in multiple paths 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:46.114 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.115 [2024-05-15 08:55:02.220652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.115 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.373 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:46.373 08:55:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.373 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:14:46.373 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.373 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:46.373 08:55:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:14:48.904 08:55:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 
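Stripped of the xtrace noise, the two nmic test cases traced above reduce to the sketch below. It assumes a running nvmf_tgt reachable on 10.0.0.2; rpc.py stands in for the rpc_cmd wrapper, and the --hostnqn/--hostid flags shown in the trace are omitted for brevity.
#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0

# test case 1: a bdev claimed by one subsystem cannot be added to another
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1,
# so the call returns Code=-32602 "Invalid parameters" as seen in the trace
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected success: namespace add should have been rejected" >&2
    exit 1
fi

# test case 2: the host connects to the same subsystem over two listeners
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421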
00:14:48.904 [global] 00:14:48.904 thread=1 00:14:48.904 invalidate=1 00:14:48.904 rw=write 00:14:48.904 time_based=1 00:14:48.904 runtime=1 00:14:48.904 ioengine=libaio 00:14:48.904 direct=1 00:14:48.904 bs=4096 00:14:48.904 iodepth=1 00:14:48.904 norandommap=0 00:14:48.904 numjobs=1 00:14:48.904 00:14:48.904 verify_dump=1 00:14:48.904 verify_backlog=512 00:14:48.904 verify_state_save=0 00:14:48.904 do_verify=1 00:14:48.904 verify=crc32c-intel 00:14:48.904 [job0] 00:14:48.904 filename=/dev/nvme0n1 00:14:48.904 Could not set queue depth (nvme0n1) 00:14:48.904 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:48.904 fio-3.35 00:14:48.904 Starting 1 thread 00:14:49.874 00:14:49.874 job0: (groupid=0, jobs=1): err= 0: pid=75798: Wed May 15 08:55:05 2024 00:14:49.874 read: IOPS=2755, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:14:49.874 slat (nsec): min=13747, max=64279, avg=19941.90, stdev=6732.87 00:14:49.874 clat (usec): min=130, max=412, avg=169.55, stdev=29.41 00:14:49.874 lat (usec): min=144, max=437, avg=189.50, stdev=32.03 00:14:49.874 clat percentiles (usec): 00:14:49.874 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:14:49.874 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:14:49.874 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 215], 95.00th=[ 231], 00:14:49.874 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 351], 99.95th=[ 359], 00:14:49.874 | 99.99th=[ 412] 00:14:49.874 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:49.875 slat (usec): min=20, max=145, avg=30.80, stdev=10.97 00:14:49.875 clat (usec): min=92, max=341, avg=120.42, stdev=18.00 00:14:49.875 lat (usec): min=113, max=487, avg=151.22, stdev=22.23 00:14:49.875 clat percentiles (usec): 00:14:49.875 | 1.00th=[ 96], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 106], 00:14:49.875 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 121], 00:14:49.875 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 147], 95.00th=[ 157], 00:14:49.875 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 198], 99.95th=[ 235], 00:14:49.875 | 99.99th=[ 343] 00:14:49.875 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:49.875 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:49.875 lat (usec) : 100=3.84%, 250=95.18%, 500=0.98% 00:14:49.875 cpu : usr=2.80%, sys=11.20%, ctx=5836, majf=0, minf=2 00:14:49.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:49.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.875 issued rwts: total=2758,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:49.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:49.875 00:14:49.875 Run status group 0 (all jobs): 00:14:49.875 READ: bw=10.8MiB/s (11.3MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=10.8MiB (11.3MB), run=1001-1001msec 00:14:49.875 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:14:49.875 00:14:49.875 Disk stats (read/write): 00:14:49.875 nvme0n1: ios=2610/2632, merge=0/0, ticks=452/352, in_queue=804, util=91.28% 00:14:49.875 08:55:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 
-- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.875 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.875 rmmod nvme_tcp 00:14:49.875 rmmod nvme_fabrics 00:14:49.875 rmmod nvme_keyring 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 75702 ']' 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 75702 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 75702 ']' 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 75702 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75702 00:14:50.133 killing process with pid 75702 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75702' 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 75702 00:14:50.133 [2024-05-15 08:55:06.155491] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:50.133 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 75702 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 
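The teardown traced here amounts to the steps sketched below. The harness helpers (waitforserial_disconnect, killprocess, remove_spdk_ns) are replaced by their obvious plain-command equivalents; the target pid is assumed to be in $nvmfpid.
#!/usr/bin/env bash
# detach the initiator from the subsystem used by the test
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# unload the NVMe/TCP initiator stack; nvme_fabrics and nvme_keyring are
# removed as dependencies, matching the rmmod lines in the trace
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the nvmf_tgt started earlier (killprocess in the harness additionally
# waits for the pid and checks that it was not terminated by a signal)
kill "$nvmfpid"
wait "$nvmfpid" || true

# network cleanup: flush the initiator address, drop the namespace and bridge
ip -4 addr flush nvmf_init_if
ip netns delete nvmf_tgt_ns_spdk
ip link delete nvmf_br type bridge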
00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:50.391 00:14:50.391 real 0m5.199s 00:14:50.391 user 0m16.850s 00:14:50.391 sys 0m1.288s 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:50.391 ************************************ 00:14:50.391 END TEST nvmf_nmic 00:14:50.391 ************************************ 00:14:50.391 08:55:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.392 08:55:06 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:50.392 08:55:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:50.392 08:55:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:50.392 08:55:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.392 ************************************ 00:14:50.392 START TEST nvmf_fio_target 00:14:50.392 ************************************ 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:50.392 * Looking for test storage... 00:14:50.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.392 08:55:06 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.392 08:55:06 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:50.392 Cannot find device 
"nvmf_tgt_br" 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.392 Cannot find device "nvmf_tgt_br2" 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:50.392 Cannot find device "nvmf_tgt_br" 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:50.392 Cannot find device "nvmf_tgt_br2" 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:14:50.392 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:50.650 08:55:06 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:50.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:50.650 00:14:50.650 --- 10.0.0.2 ping statistics --- 00:14:50.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.650 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:50.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:50.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:50.650 00:14:50.650 --- 10.0.0.3 ping statistics --- 00:14:50.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.650 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:50.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:50.650 00:14:50.650 --- 10.0.0.1 ping statistics --- 00:14:50.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.650 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.650 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.907 08:55:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:50.907 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.907 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:50.907 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.907 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=75977 00:14:50.907 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 75977 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 75977 ']' 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:50.908 08:55:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.908 [2024-05-15 08:55:06.964881] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:50.908 [2024-05-15 08:55:06.964986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.908 [2024-05-15 08:55:07.105411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.166 [2024-05-15 08:55:07.178797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.166 [2024-05-15 08:55:07.178859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
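As a reference for the nvmfappstart/waitforlisten helpers traced in this part of the log: starting the target inside the namespace and waiting for its RPC socket comes down to roughly the following. This is a sketch, not the harness code; the polling loop only approximates waitforlisten.
#!/usr/bin/env bash
spdk=/home/vagrant/spdk_repo/spdk

# launch nvmf_tgt inside the target namespace with the flags from the trace:
# shared-memory id 0, all trace groups enabled (0xFFFF), core mask 0xF
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# approximate waitforlisten: poll until the app answers on /var/tmp/spdk.sock
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target already exited
    sleep 0.2
done
echo "nvmf_tgt (pid $nvmfpid) is up"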
00:14:51.166 [2024-05-15 08:55:07.178873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.166 [2024-05-15 08:55:07.178883] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.166 [2024-05-15 08:55:07.178892] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.166 [2024-05-15 08:55:07.179109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.166 [2024-05-15 08:55:07.179215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.166 [2024-05-15 08:55:07.179396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.166 [2024-05-15 08:55:07.179404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.732 08:55:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.989 [2024-05-15 08:55:08.144828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.989 08:55:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.248 08:55:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:52.248 08:55:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.814 08:55:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:52.814 08:55:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:53.073 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:53.073 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:53.330 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:53.330 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:53.589 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:53.847 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:53.847 08:55:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.105 08:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:54.105 08:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.362 08:55:10 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:54.362 08:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:54.620 08:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.878 08:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:54.878 08:55:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.137 08:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:55.137 08:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:55.396 08:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.654 [2024-05-15 08:55:11.710725] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:55.654 [2024-05-15 08:55:11.711018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.654 08:55:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:55.912 08:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:14:56.170 08:55:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:14:58.708 08:55:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:58.708 [global] 00:14:58.708 thread=1 00:14:58.708 invalidate=1 00:14:58.708 rw=write 00:14:58.708 time_based=1 00:14:58.708 runtime=1 00:14:58.708 ioengine=libaio 00:14:58.708 direct=1 00:14:58.708 bs=4096 00:14:58.708 iodepth=1 00:14:58.708 norandommap=0 00:14:58.708 numjobs=1 00:14:58.708 00:14:58.708 verify_dump=1 00:14:58.708 verify_backlog=512 00:14:58.708 verify_state_save=0 00:14:58.708 do_verify=1 00:14:58.708 verify=crc32c-intel 00:14:58.708 [job0] 00:14:58.708 filename=/dev/nvme0n1 00:14:58.708 [job1] 00:14:58.708 filename=/dev/nvme0n2 00:14:58.708 [job2] 00:14:58.708 filename=/dev/nvme0n3 00:14:58.708 [job3] 00:14:58.708 filename=/dev/nvme0n4 00:14:58.708 Could not set queue depth (nvme0n1) 00:14:58.708 Could not set queue depth (nvme0n2) 00:14:58.708 Could not set queue depth (nvme0n3) 00:14:58.708 Could not set queue depth (nvme0n4) 00:14:58.708 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.708 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.708 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.708 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.708 fio-3.35 00:14:58.708 Starting 4 threads 00:14:59.644 00:14:59.644 job0: (groupid=0, jobs=1): err= 0: pid=76276: Wed May 15 08:55:15 2024 00:14:59.644 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:14:59.644 slat (nsec): min=12170, max=62797, avg=20822.06, stdev=8107.19 00:14:59.644 clat (usec): min=149, max=41155, avg=229.20, stdev=906.71 00:14:59.644 lat (usec): min=164, max=41173, avg=250.03, stdev=906.64 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:14:59.644 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:14:59.644 | 70.00th=[ 204], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 314], 00:14:59.644 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 523], 99.95th=[ 553], 00:14:59.644 | 99.99th=[41157] 00:14:59.644 write: IOPS=2387, BW=9550KiB/s (9780kB/s)(9560KiB/1001msec); 0 zone resets 00:14:59.644 slat (nsec): min=12810, max=95735, avg=29356.40, stdev=10193.72 00:14:59.644 clat (usec): min=107, max=467, avg=170.79, stdev=56.58 00:14:59.644 lat (usec): min=128, max=492, avg=200.14, stdev=57.39 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:14:59.644 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 149], 00:14:59.644 | 70.00th=[ 204], 80.00th=[ 227], 90.00th=[ 269], 95.00th=[ 281], 00:14:59.644 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 375], 99.95th=[ 408], 00:14:59.644 | 99.99th=[ 469] 00:14:59.644 bw ( KiB/s): min=12288, max=12288, per=31.49%, avg=12288.00, stdev= 0.00, samples=1 00:14:59.644 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:59.644 lat (usec) : 250=80.08%, 500=19.85%, 750=0.05% 00:14:59.644 lat (msec) : 50=0.02% 00:14:59.644 cpu : usr=2.20%, sys=8.50%, ctx=4456, majf=0, minf=10 00:14:59.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 issued rwts: 
total=2048,2390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.644 job1: (groupid=0, jobs=1): err= 0: pid=76277: Wed May 15 08:55:15 2024 00:14:59.644 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:59.644 slat (nsec): min=10169, max=70079, avg=19717.70, stdev=6403.12 00:14:59.644 clat (usec): min=174, max=41224, avg=337.64, stdev=1079.28 00:14:59.644 lat (usec): min=194, max=41235, avg=357.36, stdev=1079.06 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 273], 00:14:59.644 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:14:59.644 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 363], 00:14:59.644 | 99.00th=[ 445], 99.50th=[ 2802], 99.90th=[ 7439], 99.95th=[41157], 00:14:59.644 | 99.99th=[41157] 00:14:59.644 write: IOPS=1742, BW=6969KiB/s (7136kB/s)(6976KiB/1001msec); 0 zone resets 00:14:59.644 slat (usec): min=12, max=124, avg=28.19, stdev= 8.07 00:14:59.644 clat (usec): min=117, max=441, avg=226.40, stdev=29.80 00:14:59.644 lat (usec): min=142, max=466, avg=254.59, stdev=29.69 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 145], 5.00th=[ 186], 10.00th=[ 200], 20.00th=[ 208], 00:14:59.644 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:14:59.644 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 281], 00:14:59.644 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 375], 99.95th=[ 441], 00:14:59.644 | 99.99th=[ 441] 00:14:59.644 bw ( KiB/s): min= 8192, max= 8192, per=20.99%, avg=8192.00, stdev= 0.00, samples=1 00:14:59.644 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:59.644 lat (usec) : 250=45.64%, 500=54.02%, 750=0.06%, 1000=0.03% 00:14:59.644 lat (msec) : 4=0.15%, 10=0.06%, 50=0.03% 00:14:59.644 cpu : usr=2.10%, sys=5.60%, ctx=3290, majf=0, minf=11 00:14:59.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 issued rwts: total=1536,1744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.644 job2: (groupid=0, jobs=1): err= 0: pid=76278: Wed May 15 08:55:15 2024 00:14:59.644 read: IOPS=2613, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:14:59.644 slat (nsec): min=13870, max=63597, avg=19545.30, stdev=5716.08 00:14:59.644 clat (usec): min=137, max=1783, avg=173.78, stdev=39.93 00:14:59.644 lat (usec): min=164, max=1799, avg=193.33, stdev=40.73 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:14:59.644 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:14:59.644 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:14:59.644 | 99.00th=[ 210], 99.50th=[ 351], 99.90th=[ 611], 99.95th=[ 685], 00:14:59.644 | 99.99th=[ 1778] 00:14:59.644 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:59.644 slat (usec): min=20, max=109, avg=25.73, stdev= 6.40 00:14:59.644 clat (usec): min=107, max=838, avg=131.83, stdev=19.04 00:14:59.644 lat (usec): min=131, max=862, avg=157.56, stdev=21.15 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123], 00:14:59.644 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 
133], 00:14:59.644 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:14:59.644 | 99.00th=[ 161], 99.50th=[ 172], 99.90th=[ 334], 99.95th=[ 433], 00:14:59.644 | 99.99th=[ 840] 00:14:59.644 bw ( KiB/s): min=12288, max=12288, per=31.49%, avg=12288.00, stdev= 0.00, samples=1 00:14:59.644 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:59.644 lat (usec) : 250=99.53%, 500=0.37%, 750=0.07%, 1000=0.02% 00:14:59.644 lat (msec) : 2=0.02% 00:14:59.644 cpu : usr=2.30%, sys=9.80%, ctx=5689, majf=0, minf=3 00:14:59.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 issued rwts: total=2616,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.644 job3: (groupid=0, jobs=1): err= 0: pid=76279: Wed May 15 08:55:15 2024 00:14:59.644 read: IOPS=2126, BW=8507KiB/s (8712kB/s)(8516KiB/1001msec) 00:14:59.644 slat (nsec): min=14453, max=40761, avg=17481.55, stdev=3693.15 00:14:59.644 clat (usec): min=149, max=424, avg=218.05, stdev=61.42 00:14:59.644 lat (usec): min=164, max=456, avg=235.53, stdev=63.06 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:14:59.644 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 194], 00:14:59.644 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 306], 00:14:59.644 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 416], 99.95th=[ 424], 00:14:59.644 | 99.99th=[ 424] 00:14:59.644 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:59.644 slat (nsec): min=19851, max=96626, avg=25499.99, stdev=6323.08 00:14:59.644 clat (usec): min=104, max=1490, avg=165.99, stdev=53.81 00:14:59.644 lat (usec): min=124, max=1513, avg=191.49, stdev=56.47 00:14:59.644 clat percentiles (usec): 00:14:59.644 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:14:59.644 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 163], 00:14:59.644 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 239], 00:14:59.644 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 302], 99.95th=[ 363], 00:14:59.644 | 99.99th=[ 1483] 00:14:59.644 bw ( KiB/s): min= 8192, max= 8192, per=20.99%, avg=8192.00, stdev= 0.00, samples=1 00:14:59.644 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:59.644 lat (usec) : 250=81.34%, 500=18.64% 00:14:59.644 lat (msec) : 2=0.02% 00:14:59.644 cpu : usr=2.00%, sys=7.50%, ctx=4690, majf=0, minf=11 00:14:59.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.644 issued rwts: total=2129,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.644 00:14:59.644 Run status group 0 (all jobs): 00:14:59.644 READ: bw=32.5MiB/s (34.1MB/s), 6138KiB/s-10.2MiB/s (6285kB/s-10.7MB/s), io=32.5MiB (34.1MB), run=1001-1001msec 00:14:59.644 WRITE: bw=38.1MiB/s (40.0MB/s), 6969KiB/s-12.0MiB/s (7136kB/s-12.6MB/s), io=38.1MiB (40.0MB), run=1001-1001msec 00:14:59.644 00:14:59.644 Disk stats (read/write): 00:14:59.644 nvme0n1: ios=1967/2048, merge=0/0, ticks=465/346, in_queue=811, util=88.58% 
00:14:59.644 nvme0n2: ios=1365/1536, merge=0/0, ticks=452/355, in_queue=807, util=88.17% 00:14:59.644 nvme0n3: ios=2335/2560, merge=0/0, ticks=450/372, in_queue=822, util=89.89% 00:14:59.644 nvme0n4: ios=1833/2048, merge=0/0, ticks=426/377, in_queue=803, util=89.84% 00:14:59.644 08:55:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:59.644 [global] 00:14:59.644 thread=1 00:14:59.644 invalidate=1 00:14:59.644 rw=randwrite 00:14:59.644 time_based=1 00:14:59.645 runtime=1 00:14:59.645 ioengine=libaio 00:14:59.645 direct=1 00:14:59.645 bs=4096 00:14:59.645 iodepth=1 00:14:59.645 norandommap=0 00:14:59.645 numjobs=1 00:14:59.645 00:14:59.645 verify_dump=1 00:14:59.645 verify_backlog=512 00:14:59.645 verify_state_save=0 00:14:59.645 do_verify=1 00:14:59.645 verify=crc32c-intel 00:14:59.645 [job0] 00:14:59.645 filename=/dev/nvme0n1 00:14:59.645 [job1] 00:14:59.645 filename=/dev/nvme0n2 00:14:59.645 [job2] 00:14:59.645 filename=/dev/nvme0n3 00:14:59.645 [job3] 00:14:59.645 filename=/dev/nvme0n4 00:14:59.645 Could not set queue depth (nvme0n1) 00:14:59.645 Could not set queue depth (nvme0n2) 00:14:59.645 Could not set queue depth (nvme0n3) 00:14:59.645 Could not set queue depth (nvme0n4) 00:14:59.903 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.903 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.903 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.903 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.903 fio-3.35 00:14:59.903 Starting 4 threads 00:15:01.301 00:15:01.301 job0: (groupid=0, jobs=1): err= 0: pid=76332: Wed May 15 08:55:17 2024 00:15:01.301 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:01.301 slat (usec): min=8, max=426, avg=23.04, stdev=25.35 00:15:01.301 clat (usec): min=4, max=1017, avg=337.90, stdev=82.61 00:15:01.301 lat (usec): min=158, max=1047, avg=360.94, stdev=79.36 00:15:01.301 clat percentiles (usec): 00:15:01.301 | 1.00th=[ 145], 5.00th=[ 167], 10.00th=[ 241], 20.00th=[ 289], 00:15:01.301 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:15:01.301 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 457], 00:15:01.301 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 1012], 99.95th=[ 1020], 00:15:01.301 | 99.99th=[ 1020] 00:15:01.301 write: IOPS=1705, BW=6821KiB/s (6985kB/s)(6828KiB/1001msec); 0 zone resets 00:15:01.301 slat (usec): min=10, max=121, avg=29.99, stdev=11.20 00:15:01.301 clat (usec): min=115, max=437, avg=226.82, stdev=40.89 00:15:01.301 lat (usec): min=141, max=463, avg=256.81, stdev=43.23 00:15:01.301 clat percentiles (usec): 00:15:01.301 | 1.00th=[ 151], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:15:01.301 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 229], 60.00th=[ 241], 00:15:01.301 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:15:01.301 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 437], 00:15:01.301 | 99.99th=[ 437] 00:15:01.301 bw ( KiB/s): min= 8192, max= 8192, per=26.74%, avg=8192.00, stdev= 0.00, samples=1 00:15:01.301 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:01.301 lat (usec) : 10=0.19%, 50=0.03%, 250=43.11%, 500=55.44%, 750=1.17% 00:15:01.301 lat (msec) : 2=0.06% 
00:15:01.301 cpu : usr=1.90%, sys=6.10%, ctx=3450, majf=0, minf=6 00:15:01.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.301 issued rwts: total=1536,1707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.301 job1: (groupid=0, jobs=1): err= 0: pid=76333: Wed May 15 08:55:17 2024 00:15:01.301 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:01.301 slat (usec): min=8, max=668, avg=19.09, stdev=28.61 00:15:01.301 clat (usec): min=2, max=4246, avg=359.37, stdev=181.11 00:15:01.301 lat (usec): min=179, max=4266, avg=378.46, stdev=181.09 00:15:01.301 clat percentiles (usec): 00:15:01.301 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 247], 20.00th=[ 314], 00:15:01.301 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 359], 00:15:01.301 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 490], 00:15:01.301 | 99.00th=[ 553], 99.50th=[ 824], 99.90th=[ 3916], 99.95th=[ 4228], 00:15:01.301 | 99.99th=[ 4228] 00:15:01.301 write: IOPS=1599, BW=6398KiB/s (6551kB/s)(6404KiB/1001msec); 0 zone resets 00:15:01.301 slat (usec): min=13, max=144, avg=25.99, stdev= 9.65 00:15:01.301 clat (usec): min=95, max=496, avg=231.73, stdev=48.18 00:15:01.301 lat (usec): min=136, max=512, avg=257.72, stdev=45.99 00:15:01.301 clat percentiles (usec): 00:15:01.301 | 1.00th=[ 117], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 188], 00:15:01.301 | 30.00th=[ 198], 40.00th=[ 215], 50.00th=[ 233], 60.00th=[ 249], 00:15:01.301 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 306], 00:15:01.301 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 420], 99.95th=[ 498], 00:15:01.301 | 99.99th=[ 498] 00:15:01.301 bw ( KiB/s): min= 8192, max= 8192, per=26.74%, avg=8192.00, stdev= 0.00, samples=1 00:15:01.301 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:01.301 lat (usec) : 4=0.03%, 10=0.06%, 100=0.06%, 250=36.24%, 500=61.84% 00:15:01.301 lat (usec) : 750=1.47%, 1000=0.06% 00:15:01.301 lat (msec) : 2=0.10%, 4=0.10%, 10=0.03% 00:15:01.301 cpu : usr=1.80%, sys=5.00%, ctx=3251, majf=0, minf=9 00:15:01.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.301 issued rwts: total=1536,1601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.301 job2: (groupid=0, jobs=1): err= 0: pid=76334: Wed May 15 08:55:17 2024 00:15:01.301 read: IOPS=1650, BW=6601KiB/s (6760kB/s)(6608KiB/1001msec) 00:15:01.301 slat (usec): min=8, max=210, avg=20.95, stdev= 8.83 00:15:01.301 clat (usec): min=41, max=3168, avg=310.36, stdev=143.79 00:15:01.301 lat (usec): min=169, max=3198, avg=331.31, stdev=144.10 00:15:01.301 clat percentiles (usec): 00:15:01.301 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:15:01.301 | 30.00th=[ 194], 40.00th=[ 265], 50.00th=[ 314], 60.00th=[ 343], 00:15:01.301 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 465], 95.00th=[ 490], 00:15:01.301 | 99.00th=[ 725], 99.50th=[ 766], 99.90th=[ 1074], 99.95th=[ 3163], 00:15:01.301 | 99.99th=[ 3163] 00:15:01.301 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 
00:15:01.301 slat (usec): min=10, max=589, avg=26.93, stdev=15.18 00:15:01.301 clat (usec): min=112, max=617, avg=190.54, stdev=61.78 00:15:01.301 lat (usec): min=139, max=732, avg=217.47, stdev=61.03 00:15:01.301 clat percentiles (usec): 00:15:01.301 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 135], 00:15:01.301 | 30.00th=[ 143], 40.00th=[ 161], 50.00th=[ 174], 60.00th=[ 190], 00:15:01.301 | 70.00th=[ 215], 80.00th=[ 251], 90.00th=[ 293], 95.00th=[ 310], 00:15:01.301 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 404], 99.95th=[ 420], 00:15:01.302 | 99.99th=[ 619] 00:15:01.302 bw ( KiB/s): min= 8792, max= 8792, per=28.70%, avg=8792.00, stdev= 0.00, samples=1 00:15:01.302 iops : min= 2200, max= 2200, avg=2200.00, stdev= 0.00, samples=1 00:15:01.302 lat (usec) : 50=0.03%, 250=60.14%, 500=38.05%, 750=1.46%, 1000=0.27% 00:15:01.302 lat (msec) : 2=0.03%, 4=0.03% 00:15:01.302 cpu : usr=1.90%, sys=6.60%, ctx=3947, majf=0, minf=9 00:15:01.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.302 issued rwts: total=1652,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.302 job3: (groupid=0, jobs=1): err= 0: pid=76335: Wed May 15 08:55:17 2024 00:15:01.302 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:01.302 slat (usec): min=12, max=746, avg=18.09, stdev=16.60 00:15:01.302 clat (usec): min=150, max=4213, avg=228.72, stdev=199.36 00:15:01.302 lat (usec): min=165, max=4231, avg=246.81, stdev=200.65 00:15:01.302 clat percentiles (usec): 00:15:01.302 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:15:01.302 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:15:01.302 | 70.00th=[ 188], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 408], 00:15:01.302 | 99.00th=[ 445], 99.50th=[ 523], 99.90th=[ 3621], 99.95th=[ 3916], 00:15:01.302 | 99.99th=[ 4228] 00:15:01.302 write: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec); 0 zone resets 00:15:01.302 slat (usec): min=14, max=111, avg=25.85, stdev= 6.95 00:15:01.302 clat (usec): min=104, max=3192, avg=184.67, stdev=106.71 00:15:01.302 lat (usec): min=128, max=3217, avg=210.52, stdev=108.43 00:15:01.302 clat percentiles (usec): 00:15:01.302 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123], 00:15:01.302 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 147], 00:15:01.302 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:15:01.302 | 99.00th=[ 482], 99.50th=[ 553], 99.90th=[ 979], 99.95th=[ 1188], 00:15:01.302 | 99.99th=[ 3195] 00:15:01.302 bw ( KiB/s): min= 7816, max= 7816, per=25.51%, avg=7816.00, stdev= 0.00, samples=1 00:15:01.302 iops : min= 1954, max= 1954, avg=1954.00, stdev= 0.00, samples=1 00:15:01.302 lat (usec) : 250=73.20%, 500=25.97%, 750=0.53%, 1000=0.11% 00:15:01.302 lat (msec) : 2=0.02%, 4=0.14%, 10=0.02% 00:15:01.302 cpu : usr=1.30%, sys=8.00%, ctx=4366, majf=0, minf=23 00:15:01.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.302 issued rwts: total=2048,2311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.302 latency : target=0, window=0, percentile=100.00%, depth=1 
00:15:01.302 00:15:01.302 Run status group 0 (all jobs): 00:15:01.302 READ: bw=26.4MiB/s (27.7MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=26.5MiB (27.7MB), run=1001-1001msec 00:15:01.302 WRITE: bw=29.9MiB/s (31.4MB/s), 6398KiB/s-9235KiB/s (6551kB/s-9456kB/s), io=29.9MiB (31.4MB), run=1001-1001msec 00:15:01.302 00:15:01.302 Disk stats (read/write): 00:15:01.302 nvme0n1: ios=1363/1536, merge=0/0, ticks=466/358, in_queue=824, util=88.48% 00:15:01.302 nvme0n2: ios=1294/1536, merge=0/0, ticks=433/334, in_queue=767, util=88.69% 00:15:01.302 nvme0n3: ios=1579/1720, merge=0/0, ticks=516/329, in_queue=845, util=91.27% 00:15:01.302 nvme0n4: ios=1617/2048, merge=0/0, ticks=437/395, in_queue=832, util=90.20% 00:15:01.302 08:55:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:01.302 [global] 00:15:01.302 thread=1 00:15:01.302 invalidate=1 00:15:01.302 rw=write 00:15:01.302 time_based=1 00:15:01.302 runtime=1 00:15:01.302 ioengine=libaio 00:15:01.302 direct=1 00:15:01.302 bs=4096 00:15:01.302 iodepth=128 00:15:01.302 norandommap=0 00:15:01.302 numjobs=1 00:15:01.302 00:15:01.302 verify_dump=1 00:15:01.302 verify_backlog=512 00:15:01.302 verify_state_save=0 00:15:01.302 do_verify=1 00:15:01.302 verify=crc32c-intel 00:15:01.302 [job0] 00:15:01.302 filename=/dev/nvme0n1 00:15:01.302 [job1] 00:15:01.302 filename=/dev/nvme0n2 00:15:01.302 [job2] 00:15:01.302 filename=/dev/nvme0n3 00:15:01.302 [job3] 00:15:01.302 filename=/dev/nvme0n4 00:15:01.302 Could not set queue depth (nvme0n1) 00:15:01.302 Could not set queue depth (nvme0n2) 00:15:01.302 Could not set queue depth (nvme0n3) 00:15:01.302 Could not set queue depth (nvme0n4) 00:15:01.302 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.302 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.302 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.302 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.302 fio-3.35 00:15:01.302 Starting 4 threads 00:15:02.678 00:15:02.678 job0: (groupid=0, jobs=1): err= 0: pid=76389: Wed May 15 08:55:18 2024 00:15:02.678 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:15:02.678 slat (usec): min=7, max=6516, avg=159.45, stdev=619.22 00:15:02.678 clat (usec): min=15889, max=27291, avg=21051.88, stdev=1343.42 00:15:02.678 lat (usec): min=17509, max=27317, avg=21211.33, stdev=1217.05 00:15:02.678 clat percentiles (usec): 00:15:02.678 | 1.00th=[17695], 5.00th=[18482], 10.00th=[19792], 20.00th=[20579], 00:15:02.678 | 30.00th=[20841], 40.00th=[20841], 50.00th=[20841], 60.00th=[21103], 00:15:02.678 | 70.00th=[21103], 80.00th=[21365], 90.00th=[22938], 95.00th=[23462], 00:15:02.678 | 99.00th=[24511], 99.50th=[24511], 99.90th=[27132], 99.95th=[27395], 00:15:02.678 | 99.99th=[27395] 00:15:02.678 write: IOPS=3140, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1004msec); 0 zone resets 00:15:02.678 slat (usec): min=12, max=5511, avg=153.35, stdev=689.83 00:15:02.678 clat (usec): min=472, max=25181, avg=19575.38, stdev=2742.26 00:15:02.678 lat (usec): min=4056, max=25212, avg=19728.73, stdev=2666.69 00:15:02.678 clat percentiles (usec): 00:15:02.678 | 1.00th=[ 5014], 5.00th=[16581], 10.00th=[17433], 20.00th=[19006], 00:15:02.678 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19268], 
60.00th=[19530], 00:15:02.678 | 70.00th=[19530], 80.00th=[20317], 90.00th=[23725], 95.00th=[23987], 00:15:02.678 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25035], 99.95th=[25297], 00:15:02.678 | 99.99th=[25297] 00:15:02.678 bw ( KiB/s): min=12288, max=12312, per=23.02%, avg=12300.00, stdev=16.97, samples=2 00:15:02.678 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:15:02.678 lat (usec) : 500=0.02% 00:15:02.678 lat (msec) : 10=1.00%, 20=43.97%, 50=55.02% 00:15:02.678 cpu : usr=2.59%, sys=10.57%, ctx=284, majf=0, minf=13 00:15:02.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:02.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.678 issued rwts: total=3072,3153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.678 job1: (groupid=0, jobs=1): err= 0: pid=76390: Wed May 15 08:55:18 2024 00:15:02.678 read: IOPS=3266, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1005msec) 00:15:02.678 slat (usec): min=2, max=8672, avg=161.89, stdev=795.68 00:15:02.678 clat (usec): min=1326, max=33060, avg=19988.46, stdev=4838.08 00:15:02.678 lat (usec): min=7475, max=34378, avg=20150.36, stdev=4887.87 00:15:02.678 clat percentiles (usec): 00:15:02.678 | 1.00th=[ 8979], 5.00th=[10945], 10.00th=[11600], 20.00th=[16188], 00:15:02.678 | 30.00th=[18744], 40.00th=[20055], 50.00th=[20579], 60.00th=[21365], 00:15:02.678 | 70.00th=[23200], 80.00th=[24249], 90.00th=[25822], 95.00th=[26084], 00:15:02.679 | 99.00th=[28705], 99.50th=[29754], 99.90th=[31065], 99.95th=[32375], 00:15:02.679 | 99.99th=[33162] 00:15:02.679 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:15:02.679 slat (usec): min=3, max=5615, avg=123.93, stdev=527.84 00:15:02.679 clat (usec): min=6432, max=28861, avg=17094.71, stdev=3573.36 00:15:02.679 lat (usec): min=6458, max=28878, avg=17218.64, stdev=3599.08 00:15:02.679 clat percentiles (usec): 00:15:02.679 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[11076], 20.00th=[14353], 00:15:02.679 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17695], 60.00th=[18220], 00:15:02.679 | 70.00th=[19006], 80.00th=[19792], 90.00th=[21627], 95.00th=[22414], 00:15:02.679 | 99.00th=[23987], 99.50th=[24511], 99.90th=[28443], 99.95th=[28443], 00:15:02.679 | 99.99th=[28967] 00:15:02.679 bw ( KiB/s): min=12288, max=16384, per=26.83%, avg=14336.00, stdev=2896.31, samples=2 00:15:02.679 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:15:02.679 lat (msec) : 2=0.01%, 10=3.28%, 20=57.59%, 50=39.11% 00:15:02.679 cpu : usr=2.69%, sys=9.66%, ctx=879, majf=0, minf=8 00:15:02.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:02.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.679 issued rwts: total=3283,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.679 job2: (groupid=0, jobs=1): err= 0: pid=76391: Wed May 15 08:55:18 2024 00:15:02.679 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:15:02.679 slat (usec): min=6, max=4812, avg=160.06, stdev=621.61 00:15:02.679 clat (usec): min=13419, max=25781, avg=20397.84, stdev=1694.22 00:15:02.679 lat (usec): min=14322, max=25799, avg=20557.90, stdev=1608.03 00:15:02.679 clat percentiles 
(usec): 00:15:02.679 | 1.00th=[14877], 5.00th=[16909], 10.00th=[17957], 20.00th=[19006], 00:15:02.679 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:15:02.679 | 70.00th=[21103], 80.00th=[21103], 90.00th=[21890], 95.00th=[22414], 00:15:02.679 | 99.00th=[24773], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:15:02.679 | 99.99th=[25822] 00:15:02.679 write: IOPS=3205, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1004msec); 0 zone resets 00:15:02.679 slat (usec): min=12, max=5588, avg=148.99, stdev=667.86 00:15:02.679 clat (usec): min=2480, max=27178, avg=19786.44, stdev=2938.09 00:15:02.679 lat (usec): min=6223, max=27201, avg=19935.42, stdev=2878.38 00:15:02.679 clat percentiles (usec): 00:15:02.679 | 1.00th=[ 7439], 5.00th=[14877], 10.00th=[17171], 20.00th=[18744], 00:15:02.679 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:15:02.679 | 70.00th=[20055], 80.00th=[22414], 90.00th=[23987], 95.00th=[24249], 00:15:02.679 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27132], 99.95th=[27132], 00:15:02.679 | 99.99th=[27132] 00:15:02.679 bw ( KiB/s): min=12312, max=12432, per=23.15%, avg=12372.00, stdev=84.85, samples=2 00:15:02.679 iops : min= 3078, max= 3108, avg=3093.00, stdev=21.21, samples=2 00:15:02.679 lat (msec) : 4=0.02%, 10=0.51%, 20=49.30%, 50=50.17% 00:15:02.679 cpu : usr=3.39%, sys=10.07%, ctx=285, majf=0, minf=15 00:15:02.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:02.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.679 issued rwts: total=3072,3218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.679 job3: (groupid=0, jobs=1): err= 0: pid=76392: Wed May 15 08:55:18 2024 00:15:02.679 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:15:02.679 slat (usec): min=2, max=9482, avg=163.20, stdev=780.11 00:15:02.679 clat (usec): min=11285, max=32526, avg=21569.79, stdev=3429.54 00:15:02.679 lat (usec): min=11295, max=33135, avg=21732.99, stdev=3491.39 00:15:02.679 clat percentiles (usec): 00:15:02.679 | 1.00th=[11600], 5.00th=[13173], 10.00th=[18744], 20.00th=[19792], 00:15:02.679 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21627], 60.00th=[22676], 00:15:02.679 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25560], 95.00th=[26346], 00:15:02.679 | 99.00th=[27395], 99.50th=[27657], 99.90th=[30016], 99.95th=[31327], 00:15:02.679 | 99.99th=[32637] 00:15:02.679 write: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1004msec); 0 zone resets 00:15:02.679 slat (usec): min=4, max=7241, avg=137.28, stdev=592.63 00:15:02.679 clat (usec): min=1076, max=25301, avg=17354.53, stdev=3043.11 00:15:02.679 lat (usec): min=4123, max=25930, avg=17491.80, stdev=3050.34 00:15:02.679 clat percentiles (usec): 00:15:02.679 | 1.00th=[ 8029], 5.00th=[12256], 10.00th=[13435], 20.00th=[15270], 00:15:02.679 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[18220], 00:15:02.679 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20841], 95.00th=[21890], 00:15:02.679 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24773], 99.95th=[25297], 00:15:02.679 | 99.99th=[25297] 00:15:02.679 bw ( KiB/s): min=12664, max=14108, per=25.05%, avg=13386.00, stdev=1021.06, samples=2 00:15:02.679 iops : min= 3166, max= 3527, avg=3346.50, stdev=255.27, samples=2 00:15:02.679 lat (msec) : 2=0.02%, 10=1.28%, 20=54.68%, 50=44.02% 00:15:02.679 cpu : usr=2.39%, sys=9.17%, 
ctx=945, majf=0, minf=7 00:15:02.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:02.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.679 issued rwts: total=3072,3471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.679 00:15:02.679 Run status group 0 (all jobs): 00:15:02.679 READ: bw=48.6MiB/s (50.9MB/s), 12.0MiB/s-12.8MiB/s (12.5MB/s-13.4MB/s), io=48.8MiB (51.2MB), run=1004-1005msec 00:15:02.679 WRITE: bw=52.2MiB/s (54.7MB/s), 12.3MiB/s-13.9MiB/s (12.9MB/s-14.6MB/s), io=52.4MiB (55.0MB), run=1004-1005msec 00:15:02.679 00:15:02.679 Disk stats (read/write): 00:15:02.679 nvme0n1: ios=2610/2800, merge=0/0, ticks=13178/12374, in_queue=25552, util=88.00% 00:15:02.679 nvme0n2: ios=2943/3072, merge=0/0, ticks=19423/15201, in_queue=34624, util=88.12% 00:15:02.679 nvme0n3: ios=2588/2865, merge=0/0, ticks=12896/12551, in_queue=25447, util=89.62% 00:15:02.679 nvme0n4: ios=2608/3072, merge=0/0, ticks=17032/15609, in_queue=32641, util=89.25% 00:15:02.679 08:55:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:02.679 [global] 00:15:02.679 thread=1 00:15:02.679 invalidate=1 00:15:02.679 rw=randwrite 00:15:02.679 time_based=1 00:15:02.679 runtime=1 00:15:02.679 ioengine=libaio 00:15:02.679 direct=1 00:15:02.679 bs=4096 00:15:02.679 iodepth=128 00:15:02.679 norandommap=0 00:15:02.679 numjobs=1 00:15:02.679 00:15:02.679 verify_dump=1 00:15:02.679 verify_backlog=512 00:15:02.679 verify_state_save=0 00:15:02.679 do_verify=1 00:15:02.679 verify=crc32c-intel 00:15:02.679 [job0] 00:15:02.679 filename=/dev/nvme0n1 00:15:02.679 [job1] 00:15:02.679 filename=/dev/nvme0n2 00:15:02.679 [job2] 00:15:02.679 filename=/dev/nvme0n3 00:15:02.679 [job3] 00:15:02.679 filename=/dev/nvme0n4 00:15:02.679 Could not set queue depth (nvme0n1) 00:15:02.679 Could not set queue depth (nvme0n2) 00:15:02.679 Could not set queue depth (nvme0n3) 00:15:02.679 Could not set queue depth (nvme0n4) 00:15:02.679 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.679 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.679 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.679 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.679 fio-3.35 00:15:02.679 Starting 4 threads 00:15:04.056 00:15:04.056 job0: (groupid=0, jobs=1): err= 0: pid=76456: Wed May 15 08:55:19 2024 00:15:04.056 read: IOPS=4818, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1004msec) 00:15:04.056 slat (usec): min=7, max=5832, avg=101.48, stdev=471.73 00:15:04.056 clat (usec): min=1934, max=18509, avg=12825.35, stdev=1788.96 00:15:04.056 lat (usec): min=4023, max=18543, avg=12926.83, stdev=1822.58 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 7308], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[12125], 00:15:04.056 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:15:04.056 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15008], 95.00th=[15926], 00:15:04.056 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:15:04.056 | 99.99th=[18482] 00:15:04.056 write: 
IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:15:04.056 slat (usec): min=10, max=5006, avg=91.05, stdev=354.67 00:15:04.056 clat (usec): min=7819, max=18275, avg=12662.57, stdev=1459.86 00:15:04.056 lat (usec): min=7858, max=18386, avg=12753.61, stdev=1492.23 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 8455], 5.00th=[10552], 10.00th=[11207], 20.00th=[11863], 00:15:04.056 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:15:04.056 | 70.00th=[12911], 80.00th=[13173], 90.00th=[14222], 95.00th=[15664], 00:15:04.056 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:15:04.056 | 99.99th=[18220] 00:15:04.056 bw ( KiB/s): min=20439, max=20480, per=26.67%, avg=20459.50, stdev=28.99, samples=2 00:15:04.056 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:15:04.056 lat (msec) : 2=0.01%, 10=4.99%, 20=95.00% 00:15:04.056 cpu : usr=3.69%, sys=16.95%, ctx=742, majf=0, minf=9 00:15:04.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:04.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.056 issued rwts: total=4838,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.056 job1: (groupid=0, jobs=1): err= 0: pid=76457: Wed May 15 08:55:19 2024 00:15:04.056 read: IOPS=4876, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:15:04.056 slat (usec): min=7, max=5984, avg=100.57, stdev=475.45 00:15:04.056 clat (usec): min=2644, max=18994, avg=12820.69, stdev=1820.16 00:15:04.056 lat (usec): min=2670, max=19027, avg=12921.27, stdev=1854.78 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 8356], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11994], 00:15:04.056 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:15:04.056 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15139], 95.00th=[15664], 00:15:04.056 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:15:04.056 | 99.99th=[19006] 00:15:04.056 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:15:04.056 slat (usec): min=11, max=4877, avg=90.85, stdev=365.37 00:15:04.056 clat (usec): min=7674, max=18592, avg=12528.12, stdev=1434.74 00:15:04.056 lat (usec): min=7697, max=18938, avg=12618.97, stdev=1469.60 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[11207], 20.00th=[11731], 00:15:04.056 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:15:04.056 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[15401], 00:15:04.056 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:15:04.056 | 99.99th=[18482] 00:15:04.056 bw ( KiB/s): min=20480, max=20480, per=26.70%, avg=20480.00, stdev= 0.00, samples=2 00:15:04.056 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:15:04.056 lat (msec) : 4=0.32%, 10=5.31%, 20=94.37% 00:15:04.056 cpu : usr=4.99%, sys=14.27%, ctx=684, majf=0, minf=13 00:15:04.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:04.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.056 issued rwts: total=4891,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.056 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:15:04.056 job2: (groupid=0, jobs=1): err= 0: pid=76458: Wed May 15 08:55:19 2024 00:15:04.056 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:15:04.056 slat (usec): min=3, max=13020, avg=126.92, stdev=800.21 00:15:04.056 clat (usec): min=5602, max=28863, avg=15837.19, stdev=4081.52 00:15:04.056 lat (usec): min=5618, max=28881, avg=15964.11, stdev=4122.82 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 6194], 5.00th=[11076], 10.00th=[11863], 20.00th=[12518], 00:15:04.056 | 30.00th=[13960], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:15:04.056 | 70.00th=[16712], 80.00th=[19006], 90.00th=[21890], 95.00th=[24511], 00:15:04.056 | 99.00th=[26870], 99.50th=[27657], 99.90th=[28705], 99.95th=[28705], 00:15:04.056 | 99.99th=[28967] 00:15:04.056 write: IOPS=4506, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1012msec); 0 zone resets 00:15:04.056 slat (usec): min=5, max=12169, avg=96.86, stdev=394.31 00:15:04.056 clat (usec): min=4684, max=28712, avg=13846.06, stdev=3021.09 00:15:04.056 lat (usec): min=4717, max=28723, avg=13942.92, stdev=3052.69 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 5604], 5.00th=[ 7177], 10.00th=[ 8848], 20.00th=[11863], 00:15:04.056 | 30.00th=[14091], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:15:04.056 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15664], 95.00th=[15795], 00:15:04.056 | 99.00th=[22152], 99.50th=[24773], 99.90th=[27132], 99.95th=[27657], 00:15:04.056 | 99.99th=[28705] 00:15:04.056 bw ( KiB/s): min=17619, max=17888, per=23.14%, avg=17753.50, stdev=190.21, samples=2 00:15:04.056 iops : min= 4404, max= 4472, avg=4438.00, stdev=48.08, samples=2 00:15:04.056 lat (msec) : 10=8.49%, 20=83.01%, 50=8.50% 00:15:04.056 cpu : usr=4.06%, sys=12.07%, ctx=639, majf=0, minf=9 00:15:04.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:04.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.056 issued rwts: total=4096,4561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.056 job3: (groupid=0, jobs=1): err= 0: pid=76459: Wed May 15 08:55:19 2024 00:15:04.056 read: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1011msec) 00:15:04.056 slat (usec): min=3, max=12951, avg=124.34, stdev=803.49 00:15:04.056 clat (usec): min=5737, max=27220, avg=15637.12, stdev=3867.99 00:15:04.056 lat (usec): min=5763, max=27238, avg=15761.45, stdev=3904.44 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 6849], 5.00th=[11338], 10.00th=[11600], 20.00th=[12518], 00:15:04.056 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:15:04.056 | 70.00th=[16909], 80.00th=[18220], 90.00th=[21365], 95.00th=[23987], 00:15:04.056 | 99.00th=[26346], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:15:04.056 | 99.99th=[27132] 00:15:04.056 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:15:04.056 slat (usec): min=4, max=11551, avg=96.03, stdev=394.97 00:15:04.056 clat (usec): min=5064, max=27116, avg=13536.91, stdev=2879.24 00:15:04.056 lat (usec): min=5091, max=27126, avg=13632.94, stdev=2904.11 00:15:04.056 clat percentiles (usec): 00:15:04.056 | 1.00th=[ 5866], 5.00th=[ 7046], 10.00th=[ 8455], 20.00th=[11731], 00:15:04.056 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:15:04.056 | 70.00th=[14877], 80.00th=[15008], 
90.00th=[15139], 95.00th=[15401], 00:15:04.056 | 99.00th=[21890], 99.50th=[23725], 99.90th=[26870], 99.95th=[27132], 00:15:04.056 | 99.99th=[27132] 00:15:04.056 bw ( KiB/s): min=18260, max=18376, per=23.88%, avg=18318.00, stdev=82.02, samples=2 00:15:04.056 iops : min= 4565, max= 4594, avg=4579.50, stdev=20.51, samples=2 00:15:04.056 lat (msec) : 10=8.68%, 20=84.29%, 50=7.02% 00:15:04.056 cpu : usr=3.86%, sys=11.29%, ctx=683, majf=0, minf=16 00:15:04.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:04.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.056 issued rwts: total=4190,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.056 00:15:04.056 Run status group 0 (all jobs): 00:15:04.056 READ: bw=69.5MiB/s (72.9MB/s), 15.8MiB/s-19.0MiB/s (16.6MB/s-20.0MB/s), io=70.4MiB (73.8MB), run=1003-1012msec 00:15:04.056 WRITE: bw=74.9MiB/s (78.6MB/s), 17.6MiB/s-19.9MiB/s (18.5MB/s-20.9MB/s), io=75.8MiB (79.5MB), run=1003-1012msec 00:15:04.056 00:15:04.056 Disk stats (read/write): 00:15:04.056 nvme0n1: ios=4146/4351, merge=0/0, ticks=25286/24319, in_queue=49605, util=87.07% 00:15:04.056 nvme0n2: ios=4116/4463, merge=0/0, ticks=25094/24265, in_queue=49359, util=87.50% 00:15:04.056 nvme0n3: ios=3584/3679, merge=0/0, ticks=53485/48734, in_queue=102219, util=88.76% 00:15:04.056 nvme0n4: ios=3584/3807, merge=0/0, ticks=53106/49582, in_queue=102688, util=89.51% 00:15:04.056 08:55:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:04.056 08:55:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=76472 00:15:04.056 08:55:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:04.056 08:55:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:04.056 [global] 00:15:04.056 thread=1 00:15:04.056 invalidate=1 00:15:04.056 rw=read 00:15:04.056 time_based=1 00:15:04.056 runtime=10 00:15:04.056 ioengine=libaio 00:15:04.056 direct=1 00:15:04.056 bs=4096 00:15:04.056 iodepth=1 00:15:04.056 norandommap=1 00:15:04.056 numjobs=1 00:15:04.056 00:15:04.056 [job0] 00:15:04.056 filename=/dev/nvme0n1 00:15:04.056 [job1] 00:15:04.056 filename=/dev/nvme0n2 00:15:04.056 [job2] 00:15:04.056 filename=/dev/nvme0n3 00:15:04.056 [job3] 00:15:04.056 filename=/dev/nvme0n4 00:15:04.056 Could not set queue depth (nvme0n1) 00:15:04.056 Could not set queue depth (nvme0n2) 00:15:04.056 Could not set queue depth (nvme0n3) 00:15:04.056 Could not set queue depth (nvme0n4) 00:15:04.056 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.056 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.056 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.056 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.056 fio-3.35 00:15:04.056 Starting 4 threads 00:15:07.341 08:55:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:07.341 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=42184704, buflen=4096 00:15:07.341 fio: pid=76515, err=121/file:io_u.c:1889, func=io_u error, error=Remote 
I/O error 00:15:07.341 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:07.341 fio: pid=76514, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:07.341 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=66523136, buflen=4096 00:15:07.341 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.341 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:07.600 fio: pid=76512, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:07.600 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6098944, buflen=4096 00:15:07.600 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.600 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:07.860 fio: pid=76513, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:07.860 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=56532992, buflen=4096 00:15:07.860 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.860 08:55:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:07.860 00:15:07.860 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76512: Wed May 15 08:55:23 2024 00:15:07.860 read: IOPS=5189, BW=20.3MiB/s (21.3MB/s)(69.8MiB/3444msec) 00:15:07.860 slat (usec): min=13, max=11827, avg=19.18, stdev=147.31 00:15:07.860 clat (usec): min=99, max=2060, avg=172.09, stdev=34.79 00:15:07.860 lat (usec): min=149, max=12084, avg=191.27, stdev=152.10 00:15:07.860 clat percentiles (usec): 00:15:07.860 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:15:07.860 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:15:07.860 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 202], 00:15:07.860 | 99.00th=[ 241], 99.50th=[ 258], 99.90th=[ 506], 99.95th=[ 865], 00:15:07.860 | 99.99th=[ 1696] 00:15:07.860 bw ( KiB/s): min=19464, max=21544, per=33.16%, avg=20874.67, stdev=764.87, samples=6 00:15:07.860 iops : min= 4866, max= 5386, avg=5218.67, stdev=191.22, samples=6 00:15:07.860 lat (usec) : 100=0.01%, 250=99.31%, 500=0.58%, 750=0.04%, 1000=0.02% 00:15:07.860 lat (msec) : 2=0.03%, 4=0.01% 00:15:07.860 cpu : usr=1.57%, sys=7.09%, ctx=17905, majf=0, minf=1 00:15:07.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 issued rwts: total=17874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.860 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76513: Wed May 15 08:55:23 2024 00:15:07.860 read: IOPS=3731, BW=14.6MiB/s (15.3MB/s)(53.9MiB/3699msec) 00:15:07.860 slat (usec): min=8, max=16152, avg=18.66, stdev=213.52 00:15:07.860 clat (usec): min=128, max=8012, avg=247.95, stdev=135.22 
00:15:07.860 lat (usec): min=148, max=16444, avg=266.61, stdev=252.75 00:15:07.860 clat percentiles (usec): 00:15:07.860 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 165], 00:15:07.860 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:15:07.860 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 306], 00:15:07.860 | 99.00th=[ 343], 99.50th=[ 482], 99.90th=[ 2008], 99.95th=[ 3294], 00:15:07.860 | 99.99th=[ 7439] 00:15:07.860 bw ( KiB/s): min=12704, max=19915, per=23.18%, avg=14594.71, stdev=2443.36, samples=7 00:15:07.860 iops : min= 3176, max= 4978, avg=3648.57, stdev=610.57, samples=7 00:15:07.860 lat (usec) : 250=36.95%, 500=62.63%, 750=0.18%, 1000=0.07% 00:15:07.860 lat (msec) : 2=0.07%, 4=0.08%, 10=0.02% 00:15:07.860 cpu : usr=1.19%, sys=4.65%, ctx=13812, majf=0, minf=1 00:15:07.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 issued rwts: total=13803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.860 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76514: Wed May 15 08:55:23 2024 00:15:07.860 read: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(63.4MiB/3211msec) 00:15:07.860 slat (usec): min=13, max=14371, avg=17.67, stdev=145.96 00:15:07.860 clat (usec): min=149, max=1696, avg=178.49, stdev=26.76 00:15:07.860 lat (usec): min=163, max=14591, avg=196.16, stdev=148.97 00:15:07.860 clat percentiles (usec): 00:15:07.860 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:15:07.860 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:15:07.860 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 208], 00:15:07.860 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 302], 99.95th=[ 482], 00:15:07.860 | 99.99th=[ 1663] 00:15:07.860 bw ( KiB/s): min=18936, max=20904, per=32.43%, avg=20416.00, stdev=752.31, samples=6 00:15:07.860 iops : min= 4734, max= 5226, avg=5104.00, stdev=188.08, samples=6 00:15:07.860 lat (usec) : 250=99.32%, 500=0.63%, 750=0.02% 00:15:07.860 lat (msec) : 2=0.02% 00:15:07.860 cpu : usr=1.96%, sys=6.57%, ctx=16249, majf=0, minf=1 00:15:07.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 issued rwts: total=16242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.860 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76515: Wed May 15 08:55:23 2024 00:15:07.860 read: IOPS=3469, BW=13.5MiB/s (14.2MB/s)(40.2MiB/2969msec) 00:15:07.860 slat (nsec): min=8840, max=74004, avg=16556.12, stdev=5052.13 00:15:07.860 clat (usec): min=151, max=2285, avg=270.07, stdev=40.85 00:15:07.860 lat (usec): min=168, max=2305, avg=286.63, stdev=41.63 00:15:07.860 clat percentiles (usec): 00:15:07.860 | 1.00th=[ 182], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 249], 00:15:07.860 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:15:07.860 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:15:07.860 | 99.00th=[ 330], 99.50th=[ 392], 99.90th=[ 603], 99.95th=[ 
799], 00:15:07.860 | 99.99th=[ 1762] 00:15:07.860 bw ( KiB/s): min=12880, max=14424, per=22.06%, avg=13889.60, stdev=635.62, samples=5 00:15:07.860 iops : min= 3220, max= 3606, avg=3472.40, stdev=158.91, samples=5 00:15:07.860 lat (usec) : 250=21.24%, 500=78.51%, 750=0.17%, 1000=0.02% 00:15:07.860 lat (msec) : 2=0.03%, 4=0.01% 00:15:07.860 cpu : usr=1.28%, sys=5.09%, ctx=10301, majf=0, minf=1 00:15:07.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.860 issued rwts: total=10300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.860 00:15:07.860 Run status group 0 (all jobs): 00:15:07.860 READ: bw=61.5MiB/s (64.5MB/s), 13.5MiB/s-20.3MiB/s (14.2MB/s-21.3MB/s), io=227MiB (238MB), run=2969-3699msec 00:15:07.860 00:15:07.861 Disk stats (read/write): 00:15:07.861 nvme0n1: ios=17448/0, merge=0/0, ticks=3074/0, in_queue=3074, util=95.33% 00:15:07.861 nvme0n2: ios=13332/0, merge=0/0, ticks=3213/0, in_queue=3213, util=95.13% 00:15:07.861 nvme0n3: ios=15755/0, merge=0/0, ticks=2849/0, in_queue=2849, util=96.09% 00:15:07.861 nvme0n4: ios=9970/0, merge=0/0, ticks=2680/0, in_queue=2680, util=96.76% 00:15:08.119 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.119 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:08.379 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.379 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:08.639 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.639 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:08.898 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.898 08:55:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 76472 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:09.156 08:55:25 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.156 nvmf hotplug test: fio failed as expected 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:09.156 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:09.416 rmmod nvme_tcp 00:15:09.416 rmmod nvme_fabrics 00:15:09.416 rmmod nvme_keyring 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 75977 ']' 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 75977 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 75977 ']' 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 75977 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75977 00:15:09.416 killing process with pid 75977 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75977' 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 75977 00:15:09.416 [2024-05-15 08:55:25.617738] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:09.416 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 75977 00:15:09.674 08:55:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.674 08:55:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:09.674 ************************************ 00:15:09.674 END TEST nvmf_fio_target 00:15:09.674 ************************************ 00:15:09.674 00:15:09.674 real 0m19.368s 00:15:09.674 user 1m14.628s 00:15:09.674 sys 0m8.724s 00:15:09.675 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:09.675 08:55:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.675 08:55:25 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:09.675 08:55:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:09.675 08:55:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.675 08:55:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.675 ************************************ 00:15:09.675 START TEST nvmf_bdevio 00:15:09.675 ************************************ 00:15:09.675 08:55:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:09.933 * Looking for test storage... 
00:15:09.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.933 08:55:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.934 08:55:25 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.934 08:55:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:09.934 Cannot find device "nvmf_tgt_br" 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.934 Cannot find device "nvmf_tgt_br2" 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:09.934 Cannot find device "nvmf_tgt_br" 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:09.934 Cannot find device "nvmf_tgt_br2" 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.934 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:10.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:10.193 00:15:10.193 --- 10.0.0.2 ping statistics --- 00:15:10.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.193 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:10.193 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:10.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:10.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:10.194 00:15:10.194 --- 10.0.0.3 ping statistics --- 00:15:10.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.194 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:10.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:10.194 00:15:10.194 --- 10.0.0.1 ping statistics --- 00:15:10.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.194 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=76834 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 76834 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 76834 ']' 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:10.194 08:55:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:10.194 [2024-05-15 08:55:26.411944] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:10.194 [2024-05-15 08:55:26.412047] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.452 [2024-05-15 08:55:26.558685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.453 [2024-05-15 08:55:26.648443] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.453 [2024-05-15 08:55:26.648496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:10.453 [2024-05-15 08:55:26.648508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.453 [2024-05-15 08:55:26.648517] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.453 [2024-05-15 08:55:26.648524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.453 [2024-05-15 08:55:26.649218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:10.453 [2024-05-15 08:55:26.649322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:10.453 [2024-05-15 08:55:26.649407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:10.453 [2024-05-15 08:55:26.649413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 [2024-05-15 08:55:27.470006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 Malloc0 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:15:11.423 [2024-05-15 08:55:27.523647] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:11.423 [2024-05-15 08:55:27.524170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:11.423 { 00:15:11.423 "params": { 00:15:11.423 "name": "Nvme$subsystem", 00:15:11.423 "trtype": "$TEST_TRANSPORT", 00:15:11.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.423 "adrfam": "ipv4", 00:15:11.423 "trsvcid": "$NVMF_PORT", 00:15:11.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.423 "hdgst": ${hdgst:-false}, 00:15:11.423 "ddgst": ${ddgst:-false} 00:15:11.423 }, 00:15:11.423 "method": "bdev_nvme_attach_controller" 00:15:11.423 } 00:15:11.423 EOF 00:15:11.423 )") 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:11.423 08:55:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:11.423 "params": { 00:15:11.423 "name": "Nvme1", 00:15:11.423 "trtype": "tcp", 00:15:11.423 "traddr": "10.0.0.2", 00:15:11.423 "adrfam": "ipv4", 00:15:11.423 "trsvcid": "4420", 00:15:11.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.423 "hdgst": false, 00:15:11.423 "ddgst": false 00:15:11.423 }, 00:15:11.423 "method": "bdev_nvme_attach_controller" 00:15:11.423 }' 00:15:11.423 [2024-05-15 08:55:27.573520] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:11.423 [2024-05-15 08:55:27.573615] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76889 ] 00:15:11.691 [2024-05-15 08:55:27.707881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.691 [2024-05-15 08:55:27.776729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.691 [2024-05-15 08:55:27.776836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.691 [2024-05-15 08:55:27.776840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.691 I/O targets: 00:15:11.691 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:11.691 00:15:11.691 00:15:11.691 CUnit - A unit testing framework for C - Version 2.1-3 00:15:11.691 http://cunit.sourceforge.net/ 00:15:11.691 00:15:11.691 00:15:11.691 Suite: bdevio tests on: Nvme1n1 00:15:11.949 Test: blockdev write read block ...passed 00:15:11.949 Test: blockdev write zeroes read block ...passed 00:15:11.949 Test: blockdev write zeroes read no split ...passed 00:15:11.949 Test: blockdev write zeroes read split ...passed 00:15:11.949 Test: blockdev write zeroes read split partial ...passed 00:15:11.949 Test: blockdev reset ...[2024-05-15 08:55:28.030604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:11.949 [2024-05-15 08:55:28.030725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa39660 (9): Bad file descriptor 00:15:11.949 [2024-05-15 08:55:28.044382] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:11.949 passed 00:15:11.949 Test: blockdev write read 8 blocks ...passed 00:15:11.949 Test: blockdev write read size > 128k ...passed 00:15:11.950 Test: blockdev write read invalid size ...passed 00:15:11.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:11.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:11.950 Test: blockdev write read max offset ...passed 00:15:11.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:11.950 Test: blockdev writev readv 8 blocks ...passed 00:15:11.950 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.208 Test: blockdev writev readv block ...passed 00:15:12.208 Test: blockdev writev readv size > 128k ...passed 00:15:12.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.208 Test: blockdev comparev and writev ...[2024-05-15 08:55:28.216539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.208 [2024-05-15 08:55:28.216610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:12.208 [2024-05-15 08:55:28.216632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.208 [2024-05-15 08:55:28.216644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:12.208 [2024-05-15 08:55:28.217401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.208 [2024-05-15 08:55:28.217430] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.217449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.209 [2024-05-15 08:55:28.217459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.218153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.209 [2024-05-15 08:55:28.218181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.218199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.209 [2024-05-15 08:55:28.218211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.218828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.209 [2024-05-15 08:55:28.218856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.218875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:12.209 [2024-05-15 08:55:28.218885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:12.209 passed 00:15:12.209 Test: blockdev nvme passthru rw ...passed 00:15:12.209 Test: blockdev nvme passthru vendor specific ...[2024-05-15 08:55:28.301128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:12.209 [2024-05-15 08:55:28.301176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.301826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:12.209 [2024-05-15 08:55:28.301855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.302174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:12.209 [2024-05-15 08:55:28.302203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:12.209 [2024-05-15 08:55:28.302665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:12.209 [2024-05-15 08:55:28.302692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:12.209 passed 00:15:12.209 Test: blockdev nvme admin passthru ...passed 00:15:12.209 Test: blockdev copy ...passed 00:15:12.209 00:15:12.209 Run Summary: Type Total Ran Passed Failed Inactive 00:15:12.209 suites 1 1 n/a 0 0 00:15:12.209 tests 23 23 23 0 0 00:15:12.209 asserts 
152 152 152 0 n/a 00:15:12.209 00:15:12.209 Elapsed time = 0.896 seconds 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.470 rmmod nvme_tcp 00:15:12.470 rmmod nvme_fabrics 00:15:12.470 rmmod nvme_keyring 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 76834 ']' 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 76834 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 76834 ']' 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 76834 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76834 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:12.470 killing process with pid 76834 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76834' 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 76834 00:15:12.470 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 76834 00:15:12.470 [2024-05-15 08:55:28.673405] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:12.729 00:15:12.729 real 0m3.018s 00:15:12.729 user 0m10.740s 00:15:12.729 sys 0m0.671s 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:12.729 ************************************ 00:15:12.729 END TEST nvmf_bdevio 00:15:12.729 ************************************ 00:15:12.729 08:55:28 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:15:12.729 08:55:28 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:12.729 08:55:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:12.729 08:55:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:12.729 08:55:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.729 ************************************ 00:15:12.729 START TEST nvmf_bdevio_no_huge 00:15:12.729 ************************************ 00:15:12.729 08:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:12.990 * Looking for test storage... 00:15:12.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:12.990 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:12.991 Cannot find device "nvmf_tgt_br" 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.991 Cannot find device "nvmf_tgt_br2" 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:12.991 Cannot find device "nvmf_tgt_br" 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:12.991 Cannot find device "nvmf_tgt_br2" 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.991 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:13.250 08:55:29 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:13.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:15:13.250 00:15:13.250 --- 10.0.0.2 ping statistics --- 00:15:13.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.250 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:13.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:13.250 00:15:13.250 --- 10.0.0.3 ping statistics --- 00:15:13.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.250 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:13.250 00:15:13.250 --- 10.0.0.1 ping statistics --- 00:15:13.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.250 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=77067 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 77067 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 77067 ']' 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:13.250 08:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 [2024-05-15 08:55:29.502296] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:13.508 [2024-05-15 08:55:29.502392] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:13.508 [2024-05-15 08:55:29.646073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.767 [2024-05-15 08:55:29.755516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:13.767 [2024-05-15 08:55:29.755613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.767 [2024-05-15 08:55:29.755627] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.767 [2024-05-15 08:55:29.755636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.767 [2024-05-15 08:55:29.755643] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.767 [2024-05-15 08:55:29.755816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:13.767 [2024-05-15 08:55:29.755928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:13.767 [2024-05-15 08:55:29.756475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:13.767 [2024-05-15 08:55:29.756481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.334 [2024-05-15 08:55:30.487033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.334 Malloc0 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.334 08:55:30 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.334 [2024-05-15 08:55:30.533105] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:14.334 [2024-05-15 08:55:30.533584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.334 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:14.335 { 00:15:14.335 "params": { 00:15:14.335 "name": "Nvme$subsystem", 00:15:14.335 "trtype": "$TEST_TRANSPORT", 00:15:14.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.335 "adrfam": "ipv4", 00:15:14.335 "trsvcid": "$NVMF_PORT", 00:15:14.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.335 "hdgst": ${hdgst:-false}, 00:15:14.335 "ddgst": ${ddgst:-false} 00:15:14.335 }, 00:15:14.335 "method": "bdev_nvme_attach_controller" 00:15:14.335 } 00:15:14.335 EOF 00:15:14.335 )") 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:14.335 08:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:14.335 "params": { 00:15:14.335 "name": "Nvme1", 00:15:14.335 "trtype": "tcp", 00:15:14.335 "traddr": "10.0.0.2", 00:15:14.335 "adrfam": "ipv4", 00:15:14.335 "trsvcid": "4420", 00:15:14.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.335 "hdgst": false, 00:15:14.335 "ddgst": false 00:15:14.335 }, 00:15:14.335 "method": "bdev_nvme_attach_controller" 00:15:14.335 }' 00:15:14.593 [2024-05-15 08:55:30.586714] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:14.593 [2024-05-15 08:55:30.586816] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77121 ] 00:15:14.593 [2024-05-15 08:55:30.723864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.851 [2024-05-15 08:55:30.855981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.851 [2024-05-15 08:55:30.856055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.851 [2024-05-15 08:55:30.856065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.851 I/O targets: 00:15:14.851 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:14.851 00:15:14.851 00:15:14.852 CUnit - A unit testing framework for C - Version 2.1-3 00:15:14.852 http://cunit.sourceforge.net/ 00:15:14.852 00:15:14.852 00:15:14.852 Suite: bdevio tests on: Nvme1n1 00:15:14.852 Test: blockdev write read block ...passed 00:15:15.110 Test: blockdev write zeroes read block ...passed 00:15:15.110 Test: blockdev write zeroes read no split ...passed 00:15:15.110 Test: blockdev write zeroes read split ...passed 00:15:15.110 Test: blockdev write zeroes read split partial ...passed 00:15:15.110 Test: blockdev reset ...[2024-05-15 08:55:31.160412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:15.110 [2024-05-15 08:55:31.160548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a88360 (9): Bad file descriptor 00:15:15.110 [2024-05-15 08:55:31.178157] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:15.110 passed 00:15:15.110 Test: blockdev write read 8 blocks ...passed 00:15:15.110 Test: blockdev write read size > 128k ...passed 00:15:15.110 Test: blockdev write read invalid size ...passed 00:15:15.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.110 Test: blockdev write read max offset ...passed 00:15:15.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.110 Test: blockdev writev readv 8 blocks ...passed 00:15:15.110 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.368 Test: blockdev writev readv block ...passed 00:15:15.368 Test: blockdev writev readv size > 128k ...passed 00:15:15.368 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.368 Test: blockdev comparev and writev ...[2024-05-15 08:55:31.352097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.352171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.352195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.352207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.352543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.352585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.352605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.352617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.353063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.353096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.353116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.353127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.353486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.353518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.353538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.368 [2024-05-15 08:55:31.353549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:15.368 passed 00:15:15.368 Test: blockdev nvme passthru rw ...passed 00:15:15.368 Test: blockdev nvme passthru vendor specific ...[2024-05-15 08:55:31.437065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.368 [2024-05-15 08:55:31.437125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.437268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.368 [2024-05-15 08:55:31.437287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.437398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.368 [2024-05-15 08:55:31.437415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:15.368 [2024-05-15 08:55:31.437533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.368 [2024-05-15 08:55:31.437549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:15.368 passed 00:15:15.368 Test: blockdev nvme admin passthru ...passed 00:15:15.368 Test: blockdev copy ...passed 00:15:15.368 00:15:15.368 Run Summary: Type Total Ran Passed Failed Inactive 00:15:15.368 suites 1 1 n/a 0 0 00:15:15.368 tests 23 23 23 0 0 00:15:15.368 asserts 152 152 152 0 
n/a 00:15:15.368 00:15:15.368 Elapsed time = 0.931 seconds 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.935 rmmod nvme_tcp 00:15:15.935 rmmod nvme_fabrics 00:15:15.935 rmmod nvme_keyring 00:15:15.935 08:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 77067 ']' 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 77067 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 77067 ']' 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 77067 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77067 00:15:15.935 killing process with pid 77067 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77067' 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 77067 00:15:15.935 [2024-05-15 08:55:32.025028] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:15.935 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 77067 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.211 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.481 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:16.481 00:15:16.481 real 0m3.493s 00:15:16.481 user 0m12.505s 00:15:16.481 sys 0m1.213s 00:15:16.481 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:16.481 08:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:16.481 ************************************ 00:15:16.481 END TEST nvmf_bdevio_no_huge 00:15:16.481 ************************************ 00:15:16.481 08:55:32 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:16.481 08:55:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:16.481 08:55:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.481 08:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.481 ************************************ 00:15:16.481 START TEST nvmf_tls 00:15:16.481 ************************************ 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:16.481 * Looking for test storage... 00:15:16.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.481 Cannot find device "nvmf_tgt_br" 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.481 Cannot find device "nvmf_tgt_br2" 00:15:16.481 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link 
set nvmf_tgt_br down 00:15:16.482 Cannot find device "nvmf_tgt_br" 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.482 Cannot find device "nvmf_tgt_br2" 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.482 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.740 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.740 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:16.740 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:16.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:16.741 00:15:16.741 --- 10.0.0.2 ping statistics --- 00:15:16.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.741 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:16.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:15:16.741 00:15:16.741 --- 10.0.0.3 ping statistics --- 00:15:16.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.741 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:16.741 00:15:16.741 --- 10.0.0.1 ping statistics --- 00:15:16.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.741 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77308 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77308 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77308 ']' 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
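The nvmf_veth_init steps traced above build the test network one command at a time; condensed into one place (same interface names, addresses, and rules as in the log, with cleanup and error handling omitted), the topology is roughly:

# Sketch of the nvmf_veth_init topology: one initiator-side veth pair, two
# target-side pairs whose peers live in the nvmf_tgt_ns_spdk namespace, and a
# bridge joining the host ends.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic reach port 4420 and cross the bridge, then sanity-ping.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process itself is then started inside that namespace (the app command is prefixed with "ip netns exec nvmf_tgt_ns_spdk", as the trace shows), so its 10.0.0.2:4420 listener is reachable from the host only across nvmf_br.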
00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.741 08:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.000 [2024-05-15 08:55:33.003872] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:17.000 [2024-05-15 08:55:33.003965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.000 [2024-05-15 08:55:33.141621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.000 [2024-05-15 08:55:33.199757] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.000 [2024-05-15 08:55:33.199805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.000 [2024-05-15 08:55:33.199817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.000 [2024-05-15 08:55:33.199825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.000 [2024-05-15 08:55:33.199833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.000 [2024-05-15 08:55:33.199859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.000 08:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:17.000 08:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:17.000 08:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.000 08:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.000 08:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.258 08:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.258 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:17.258 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:17.258 true 00:15:17.516 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:17.516 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:17.774 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:17.774 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:17.774 08:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:18.031 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:18.031 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:18.290 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:18.290 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:18.290 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:18.548 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:18.548 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
jq -r .tls_version 00:15:18.806 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:18.806 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:18.806 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:18.806 08:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:19.064 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:19.064 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:19.064 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:19.322 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:19.322 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:15:19.580 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:19.580 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:19.580 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:19.839 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:19.839 08:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.i1vnhu9Pje 00:15:20.105 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:20.105 
08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.TjYcRKNyMu 00:15:20.106 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:20.106 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:20.106 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.i1vnhu9Pje 00:15:20.106 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TjYcRKNyMu 00:15:20.106 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:20.364 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:20.622 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.i1vnhu9Pje 00:15:20.622 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.i1vnhu9Pje 00:15:20.622 08:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:21.190 [2024-05-15 08:55:37.138090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.190 08:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:21.448 08:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:21.448 [2024-05-15 08:55:37.646185] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:21.448 [2024-05-15 08:55:37.646279] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:21.448 [2024-05-15 08:55:37.646460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.448 08:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:21.706 malloc0 00:15:21.706 08:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:21.964 08:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i1vnhu9Pje 00:15:22.224 [2024-05-15 08:55:38.440905] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:22.483 08:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.i1vnhu9Pje 00:15:32.453 Initializing NVMe Controllers 00:15:32.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:32.453 Initialization complete. Launching workers. 
00:15:32.453 ======================================================== 00:15:32.453 Latency(us) 00:15:32.453 Device Information : IOPS MiB/s Average min max 00:15:32.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9201.38 35.94 6957.26 1352.23 12045.43 00:15:32.453 ======================================================== 00:15:32.453 Total : 9201.38 35.94 6957.26 1352.23 12045.43 00:15:32.453 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i1vnhu9Pje 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i1vnhu9Pje' 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77655 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77655 /var/tmp/bdevperf.sock 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77655 ']' 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.453 08:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.711 [2024-05-15 08:55:48.718637] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:32.711 [2024-05-15 08:55:48.718750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77655 ] 00:15:32.711 [2024-05-15 08:55:48.856361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.711 [2024-05-15 08:55:48.917936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.969 08:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:32.969 08:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:32.969 08:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i1vnhu9Pje 00:15:33.228 [2024-05-15 08:55:49.242354] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.228 [2024-05-15 08:55:49.242466] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:33.228 TLSTESTn1 00:15:33.228 08:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:33.228 Running I/O for 10 seconds... 00:15:45.442 00:15:45.442 Latency(us) 00:15:45.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.442 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:45.442 Verification LBA range: start 0x0 length 0x2000 00:15:45.442 TLSTESTn1 : 10.03 3802.63 14.85 0.00 0.00 33592.66 7208.96 20137.43 00:15:45.442 =================================================================================================================== 00:15:45.442 Total : 3802.63 14.85 0.00 0.00 33592.66 7208.96 20137.43 00:15:45.442 0 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 77655 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77655 ']' 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77655 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77655 00:15:45.442 killing process with pid 77655 00:15:45.442 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.442 00:15:45.442 Latency(us) 00:15:45.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.442 =================================================================================================================== 00:15:45.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77655' 00:15:45.442 
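The successful TLS run above spreads its setup across many trace lines; a condensed, hedged recap of the same rpc.py calls follows. Paths, NQNs, and the PSK value are the ones shown in the log; writing the key into its temp file is inferred (xtrace does not print redirections), and both the target and the bdevperf instance are assumed to be already running.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: select the ssl sock implementation and TLS 1.3 before init.
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# Interchange-format PSK written to a 0600 temp file (value from the trace).
key_path=/tmp/tmp.i1vnhu9Pje
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"

# Subsystem with a TLS listener (-k), a malloc namespace, and the host bound
# to the PSK file.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Initiator side: the bdevperf started with "-z -r /var/tmp/bdevperf.sock"
# attaches over TLS with the same key.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The second key generated earlier (/tmp/tmp.TjYcRKNyMu) is deliberately never registered with nvmf_subsystem_add_host; the NOT run_bdevperf attempts that follow rely on exactly that.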
08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77655 00:15:45.442 [2024-05-15 08:55:59.512301] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77655 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjYcRKNyMu 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjYcRKNyMu 00:15:45.442 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjYcRKNyMu 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TjYcRKNyMu' 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77793 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77793 /var/tmp/bdevperf.sock 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77793 ']' 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:45.443 08:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.443 [2024-05-15 08:55:59.767020] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:45.443 [2024-05-15 08:55:59.767379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77793 ] 00:15:45.443 [2024-05-15 08:55:59.904439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.443 [2024-05-15 08:55:59.963043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TjYcRKNyMu 00:15:45.443 [2024-05-15 08:56:00.333814] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.443 [2024-05-15 08:56:00.333952] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:45.443 [2024-05-15 08:56:00.341939] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:45.443 [2024-05-15 08:56:00.342653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f2a40 (107): Transport endpoint is not connected 00:15:45.443 [2024-05-15 08:56:00.343638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f2a40 (9): Bad file descriptor 00:15:45.443 [2024-05-15 08:56:00.344634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:45.443 [2024-05-15 08:56:00.344663] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:45.443 [2024-05-15 08:56:00.344674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:45.443 2024/05/15 08:56:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.TjYcRKNyMu subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:15:45.443 request: 00:15:45.443 { 00:15:45.443 "method": "bdev_nvme_attach_controller", 00:15:45.443 "params": { 00:15:45.443 "name": "TLSTEST", 00:15:45.443 "trtype": "tcp", 00:15:45.443 "traddr": "10.0.0.2", 00:15:45.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:45.443 "adrfam": "ipv4", 00:15:45.443 "trsvcid": "4420", 00:15:45.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.443 "psk": "/tmp/tmp.TjYcRKNyMu" 00:15:45.443 } 00:15:45.443 } 00:15:45.443 Got JSON-RPC error response 00:15:45.443 GoRPCClient: error on JSON-RPC call 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77793 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77793 ']' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77793 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77793 00:15:45.443 killing process with pid 77793 00:15:45.443 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.443 00:15:45.443 Latency(us) 00:15:45.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.443 =================================================================================================================== 00:15:45.443 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77793' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77793 00:15:45.443 [2024-05-15 08:56:00.396997] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77793 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.i1vnhu9Pje 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.i1vnhu9Pje 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.i1vnhu9Pje 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i1vnhu9Pje' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77825 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77825 /var/tmp/bdevperf.sock 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77825 ']' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.443 [2024-05-15 08:56:00.630767] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:45.443 [2024-05-15 08:56:00.630869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77825 ] 00:15:45.443 [2024-05-15 08:56:00.767029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.443 [2024-05-15 08:56:00.826698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:45.443 08:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.i1vnhu9Pje 00:15:45.443 [2024-05-15 08:56:01.177512] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.443 [2024-05-15 08:56:01.177665] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:45.443 [2024-05-15 08:56:01.182715] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:45.443 [2024-05-15 08:56:01.182775] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:45.443 [2024-05-15 08:56:01.182834] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:45.443 [2024-05-15 08:56:01.183368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138da40 (107): Transport endpoint is not connected 00:15:45.443 [2024-05-15 08:56:01.184353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138da40 (9): Bad file descriptor 00:15:45.444 [2024-05-15 08:56:01.185349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:45.444 [2024-05-15 08:56:01.185374] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:45.444 [2024-05-15 08:56:01.185386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:45.444 2024/05/15 08:56:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.i1vnhu9Pje subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:15:45.444 request: 00:15:45.444 { 00:15:45.444 "method": "bdev_nvme_attach_controller", 00:15:45.444 "params": { 00:15:45.444 "name": "TLSTEST", 00:15:45.444 "trtype": "tcp", 00:15:45.444 "traddr": "10.0.0.2", 00:15:45.444 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:45.444 "adrfam": "ipv4", 00:15:45.444 "trsvcid": "4420", 00:15:45.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.444 "psk": "/tmp/tmp.i1vnhu9Pje" 00:15:45.444 } 00:15:45.444 } 00:15:45.444 Got JSON-RPC error response 00:15:45.444 GoRPCClient: error on JSON-RPC call 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77825 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77825 ']' 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77825 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77825 00:15:45.444 killing process with pid 77825 00:15:45.444 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.444 00:15:45.444 Latency(us) 00:15:45.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.444 =================================================================================================================== 00:15:45.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77825' 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77825 00:15:45.444 [2024-05-15 08:56:01.250994] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77825 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.i1vnhu9Pje 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.i1vnhu9Pje 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.i1vnhu9Pje 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i1vnhu9Pje' 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77857 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77857 /var/tmp/bdevperf.sock 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77857 ']' 00:15:45.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:45.444 08:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.444 [2024-05-15 08:56:01.493284] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:45.444 [2024-05-15 08:56:01.493383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77857 ] 00:15:45.444 [2024-05-15 08:56:01.629892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.703 [2024-05-15 08:56:01.704382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i1vnhu9Pje 00:15:46.662 [2024-05-15 08:56:02.733065] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.662 [2024-05-15 08:56:02.733171] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:46.662 [2024-05-15 08:56:02.742271] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:46.662 [2024-05-15 08:56:02.742315] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:46.662 [2024-05-15 08:56:02.742376] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:46.662 [2024-05-15 08:56:02.742754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe2a40 (107): Transport endpoint is not connected 00:15:46.662 [2024-05-15 08:56:02.743745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe2a40 (9): Bad file descriptor 00:15:46.662 [2024-05-15 08:56:02.744742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:46.662 [2024-05-15 08:56:02.744765] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:46.662 [2024-05-15 08:56:02.744776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
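Both failed attach attempts follow the same pattern: the target looks up the pre-shared key by the TLS PSK identity printed in the errors above, "NVMe0R01 <hostnqn> <subnqn>". Because nothing was registered for the host2/cnode1 or host1/cnode2 pairings, the lookup callback finds no key, the TLS handshake never completes, and the initiator sees errno 107 (Transport endpoint is not connected) followed by the -32602 JSON-RPC error. A toy model of that identity-keyed lookup (purely illustrative; the real callbacks live in posix.c and tcp.c, and the registered identities come from whatever nvmf_subsystem_add_host calls ran earlier in the test):

    # Hypothetical sketch of the identity-keyed PSK lookup described by the errors.
    registered_psks = {
        # identities added via nvmf_subsystem_add_host --psk earlier in the run
    }

    def find_psk(hostnqn, subnqn):
        identity = f"NVMe0R01 {hostnqn} {subnqn}"
        return registered_psks.get(identity)  # None -> TLS handshake is rejected

    # Neither of the pairings attempted above is registered, so both return None.
    print(find_psk("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    print(find_psk("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))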
00:15:46.662 2024/05/15 08:56:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.i1vnhu9Pje subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:15:46.662 request: 00:15:46.662 { 00:15:46.662 "method": "bdev_nvme_attach_controller", 00:15:46.662 "params": { 00:15:46.662 "name": "TLSTEST", 00:15:46.662 "trtype": "tcp", 00:15:46.662 "traddr": "10.0.0.2", 00:15:46.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.662 "adrfam": "ipv4", 00:15:46.662 "trsvcid": "4420", 00:15:46.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:46.662 "psk": "/tmp/tmp.i1vnhu9Pje" 00:15:46.662 } 00:15:46.662 } 00:15:46.662 Got JSON-RPC error response 00:15:46.662 GoRPCClient: error on JSON-RPC call 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77857 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77857 ']' 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77857 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77857 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:46.662 killing process with pid 77857 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77857' 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77857 00:15:46.662 Received shutdown signal, test time was about 10.000000 seconds 00:15:46.662 00:15:46.662 Latency(us) 00:15:46.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.662 =================================================================================================================== 00:15:46.662 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:46.662 [2024-05-15 08:56:02.791059] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:46.662 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77857 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:46.941 08:56:02 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.941 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77897 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77897 /var/tmp/bdevperf.sock 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77897 ']' 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:46.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:46.942 08:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.942 [2024-05-15 08:56:03.045939] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:46.942 [2024-05-15 08:56:03.046077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77897 ] 00:15:47.198 [2024-05-15 08:56:03.186637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.198 [2024-05-15 08:56:03.261470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.763 08:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:47.763 08:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:47.763 08:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:48.022 [2024-05-15 08:56:04.212289] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:48.022 [2024-05-15 08:56:04.213587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x789a00 (9): Bad file descriptor 00:15:48.022 [2024-05-15 08:56:04.214582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:48.022 [2024-05-15 08:56:04.214606] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:48.022 [2024-05-15 08:56:04.214616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:48.022 2024/05/15 08:56:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:15:48.022 request: 00:15:48.022 { 00:15:48.022 "method": "bdev_nvme_attach_controller", 00:15:48.022 "params": { 00:15:48.022 "name": "TLSTEST", 00:15:48.022 "trtype": "tcp", 00:15:48.022 "traddr": "10.0.0.2", 00:15:48.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.022 "adrfam": "ipv4", 00:15:48.022 "trsvcid": "4420", 00:15:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:15:48.022 } 00:15:48.022 } 00:15:48.022 Got JSON-RPC error response 00:15:48.022 GoRPCClient: error on JSON-RPC call 00:15:48.022 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77897 00:15:48.022 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77897 ']' 00:15:48.022 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77897 00:15:48.022 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:48.022 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:48.022 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77897 00:15:48.280 killing process with pid 77897 00:15:48.280 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.280 00:15:48.280 Latency(us) 00:15:48.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.280 =================================================================================================================== 00:15:48.280 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77897' 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77897 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77897 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 77308 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77308 ']' 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77308 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77308 00:15:48.281 killing process with pid 77308 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77308' 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77308 00:15:48.281 [2024-05-15 08:56:04.467884] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:48.281 [2024-05-15 08:56:04.467933] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:48.281 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77308 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.jvg9kNNWpN 00:15:48.539 
08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.jvg9kNNWpN 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77958 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77958 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77958 ']' 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:48.539 08:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.798 [2024-05-15 08:56:04.789841] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:48.798 [2024-05-15 08:56:04.789950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.798 [2024-05-15 08:56:04.930675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.798 [2024-05-15 08:56:04.989494] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.798 [2024-05-15 08:56:04.989555] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.798 [2024-05-15 08:56:04.989580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.798 [2024-05-15 08:56:04.989589] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.798 [2024-05-15 08:56:04.989597] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
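The NVMeTLSkey-1:02:... string produced by format_interchange_psk above is the NVMe/TCP TLS PSK interchange representation of the configured key: the raw key bytes are suffixed with their little-endian CRC-32 and base64-encoded, and the 02 field identifies the hash function (here SHA-384, matching the 48-byte key). A rough Python equivalent of what the inline python snippet computes (an approximation for illustration; the actual helper lives in nvmf/common.sh):

    import base64
    import zlib

    def format_interchange_psk(key: bytes, hash_id: int) -> str:
        # Append the little-endian CRC-32 of the key, base64-encode, and wrap
        # the result in the NVMeTLSkey-1 envelope used by the test above.
        crc = zlib.crc32(key).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(key + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

    # Should print the NVMeTLSkey-1:02:MDAx...wWXNJw==: value that is echoed
    # into /tmp/tmp.jvg9kNNWpN (and then chmod 0600'ed) in the log above.
    print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))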
00:15:48.798 [2024-05-15 08:56:04.989630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.jvg9kNNWpN 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jvg9kNNWpN 00:15:49.732 08:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:49.989 [2024-05-15 08:56:06.189910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.989 08:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:50.247 08:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:50.504 [2024-05-15 08:56:06.653953] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:50.504 [2024-05-15 08:56:06.654062] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:50.504 [2024-05-15 08:56:06.654246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.504 08:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:50.762 malloc0 00:15:50.762 08:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:51.022 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:15:51.290 [2024-05-15 08:56:07.512985] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:51.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
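setup_nvmf_tgt (target/tls.sh@49) boils down to the handful of RPCs visible above: create the TCP transport, create subsystem cnode1, add a TLS-enabled listener (-k) on 10.0.0.2:4420, back the subsystem with a malloc bdev namespace, and register host1 together with the PSK file. A small Python driver that replays the same sequence through rpc.py (a sketch; it assumes rpc.py talks to the target's default /var/tmp/spdk.sock and that the PSK file already exists with 0600 permissions):

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def rpc(*args):
        # Each call corresponds to one of the RPC invocations in the log above.
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.jvg9kNNWpN")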
00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jvg9kNNWpN 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jvg9kNNWpN' 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=78062 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 78062 /var/tmp/bdevperf.sock 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78062 ']' 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:51.563 08:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.563 [2024-05-15 08:56:07.602062] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:51.563 [2024-05-15 08:56:07.602190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78062 ] 00:15:51.563 [2024-05-15 08:56:07.741748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.821 [2024-05-15 08:56:07.827304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.759 08:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:52.759 08:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:52.759 08:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:15:52.759 [2024-05-15 08:56:08.928031] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:52.759 [2024-05-15 08:56:08.928166] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:53.017 TLSTESTn1 00:15:53.017 08:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:53.017 Running I/O for 10 seconds... 
00:16:02.982 00:16:02.982 Latency(us) 00:16:02.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.982 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:02.982 Verification LBA range: start 0x0 length 0x2000 00:16:02.982 TLSTESTn1 : 10.02 3822.62 14.93 0.00 0.00 33416.68 7506.85 27644.28 00:16:02.982 =================================================================================================================== 00:16:02.982 Total : 3822.62 14.93 0.00 0.00 33416.68 7506.85 27644.28 00:16:02.982 0 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 78062 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78062 ']' 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78062 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:02.982 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78062 00:16:03.240 killing process with pid 78062 00:16:03.241 Received shutdown signal, test time was about 10.000000 seconds 00:16:03.241 00:16:03.241 Latency(us) 00:16:03.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.241 =================================================================================================================== 00:16:03.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78062' 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78062 00:16:03.241 [2024-05-15 08:56:19.220095] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78062 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.jvg9kNNWpN 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jvg9kNNWpN 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jvg9kNNWpN 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jvg9kNNWpN 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:03.241 
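Back-of-the-envelope check on the TLSTESTn1 result table above: with the 4096-byte I/O size and queue depth 128 from the bdevperf command line (-o 4096 -q 128), the reported numbers are self-consistent:

    iops = 3822.62                    # IOPS column from the result table
    io_size = 4096                    # -o 4096
    queue_depth = 128                 # -q 128

    print(iops * io_size / 2**20)     # ~14.93 MiB/s, matching the MiB/s column
    print(queue_depth / iops * 1000)  # ~33.5 ms, in line with the ~33.4 ms average latency

so the verify workload pushed roughly 15 MB/s through the TLS-wrapped TCP connection for the 10-second run.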
08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jvg9kNNWpN' 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=78215 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 78215 /var/tmp/bdevperf.sock 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78215 ']' 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:03.241 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.499 [2024-05-15 08:56:19.474151] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:03.499 [2024-05-15 08:56:19.474240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78215 ] 00:16:03.499 [2024-05-15 08:56:19.614172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.499 [2024-05-15 08:56:19.676310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.757 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:03.757 08:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:03.757 08:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:16:03.757 [2024-05-15 08:56:19.988186] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:03.757 [2024-05-15 08:56:19.988272] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:03.757 [2024-05-15 08:56:19.988283] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.jvg9kNNWpN 00:16:04.015 2024/05/15 08:56:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.jvg9kNNWpN subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:04.015 request: 00:16:04.015 { 00:16:04.015 "method": 
"bdev_nvme_attach_controller", 00:16:04.015 "params": { 00:16:04.015 "name": "TLSTEST", 00:16:04.015 "trtype": "tcp", 00:16:04.015 "traddr": "10.0.0.2", 00:16:04.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:04.015 "adrfam": "ipv4", 00:16:04.015 "trsvcid": "4420", 00:16:04.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.015 "psk": "/tmp/tmp.jvg9kNNWpN" 00:16:04.015 } 00:16:04.015 } 00:16:04.015 Got JSON-RPC error response 00:16:04.015 GoRPCClient: error on JSON-RPC call 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 78215 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78215 ']' 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78215 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78215 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:04.015 killing process with pid 78215 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78215' 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78215 00:16:04.015 Received shutdown signal, test time was about 10.000000 seconds 00:16:04.015 00:16:04.015 Latency(us) 00:16:04.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.015 =================================================================================================================== 00:16:04.015 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78215 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 77958 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77958 ']' 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77958 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77958 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:04.015 killing process with pid 77958 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77958' 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77958 00:16:04.015 [2024-05-15 08:56:20.243959] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:04.015 [2024-05-15 08:56:20.244000] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:04.015 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77958 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78248 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78248 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78248 ']' 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.273 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:04.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.274 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.274 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:04.274 08:56:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.532 [2024-05-15 08:56:20.525897] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:04.532 [2024-05-15 08:56:20.525986] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.532 [2024-05-15 08:56:20.666257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.532 [2024-05-15 08:56:20.725521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.532 [2024-05-15 08:56:20.725596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.532 [2024-05-15 08:56:20.725609] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.532 [2024-05-15 08:56:20.725617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.532 [2024-05-15 08:56:20.725624] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
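The Code=-1 "Operation not permitted" failure above is the deliberate negative case for key-file permissions: after chmod 0666, bdev_nvme_load_psk rejects the key before any connection is attempted, and the target-side nvmf_subsystem_add_host attempt a little further below fails the same way. The check amounts to refusing any PSK file that is accessible to group or others (an approximation for illustration; the exact mode bits SPDK tests may differ):

    import os
    import stat

    def check_psk_perms(path):
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            # Mirrors the "Incorrect permissions for PSK file" errors in the log.
            raise PermissionError(f"{path} must not be group/other accessible (use chmod 0600)")

    check_psk_perms("/tmp/tmp.jvg9kNNWpN")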
00:16:04.532 [2024-05-15 08:56:20.725649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.467 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.jvg9kNNWpN 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jvg9kNNWpN 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.jvg9kNNWpN 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jvg9kNNWpN 00:16:05.468 08:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:05.726 [2024-05-15 08:56:21.819056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.726 08:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:05.985 08:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:06.244 [2024-05-15 08:56:22.419178] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:06.244 [2024-05-15 08:56:22.419289] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:06.244 [2024-05-15 08:56:22.419491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.244 08:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:06.502 malloc0 00:16:06.502 08:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:16:07.072 [2024-05-15 08:56:23.270416] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:07.072 [2024-05-15 08:56:23.270462] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:16:07.072 [2024-05-15 08:56:23.270497] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:07.072 2024/05/15 08:56:23 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.jvg9kNNWpN], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:07.072 request: 00:16:07.072 { 00:16:07.072 "method": "nvmf_subsystem_add_host", 00:16:07.072 "params": { 00:16:07.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.072 "host": "nqn.2016-06.io.spdk:host1", 00:16:07.072 "psk": "/tmp/tmp.jvg9kNNWpN" 00:16:07.072 } 00:16:07.072 } 00:16:07.072 Got JSON-RPC error response 00:16:07.072 GoRPCClient: error on JSON-RPC call 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 78248 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78248 ']' 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78248 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:07.072 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78248 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:07.329 killing process with pid 78248 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78248' 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78248 00:16:07.329 [2024-05-15 08:56:23.324247] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78248 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.jvg9kNNWpN 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78364 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78364 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:07.329 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78364 ']' 00:16:07.330 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.330 08:56:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.330 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.330 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.330 08:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.587 [2024-05-15 08:56:23.589290] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:07.587 [2024-05-15 08:56:23.589390] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.587 [2024-05-15 08:56:23.731439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.587 [2024-05-15 08:56:23.799837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.587 [2024-05-15 08:56:23.800140] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.587 [2024-05-15 08:56:23.800247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.587 [2024-05-15 08:56:23.800329] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.587 [2024-05-15 08:56:23.800454] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.587 [2024-05-15 08:56:23.800599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.jvg9kNNWpN 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jvg9kNNWpN 00:16:08.522 08:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:08.780 [2024-05-15 08:56:24.874139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.780 08:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:09.038 08:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:09.297 [2024-05-15 08:56:25.354211] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:09.297 [2024-05-15 08:56:25.354311] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:09.297 
[2024-05-15 08:56:25.354491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.297 08:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:09.556 malloc0 00:16:09.556 08:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.815 08:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:16:10.076 [2024-05-15 08:56:26.201094] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=78471 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 78471 /var/tmp/bdevperf.sock 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78471 ']' 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:10.076 08:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 [2024-05-15 08:56:26.278660] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:16:10.076 [2024-05-15 08:56:26.278764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78471 ] 00:16:10.335 [2024-05-15 08:56:26.416770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.335 [2024-05-15 08:56:26.487082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.272 08:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:11.272 08:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:11.272 08:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:16:11.531 [2024-05-15 08:56:27.579848] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.531 [2024-05-15 08:56:27.580410] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:11.531 TLSTESTn1 00:16:11.531 08:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:11.791 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:11.791 "subsystems": [ 00:16:11.791 { 00:16:11.791 "subsystem": "keyring", 00:16:11.791 "config": [] 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "subsystem": "iobuf", 00:16:11.791 "config": [ 00:16:11.791 { 00:16:11.791 "method": "iobuf_set_options", 00:16:11.791 "params": { 00:16:11.791 "large_bufsize": 135168, 00:16:11.791 "large_pool_count": 1024, 00:16:11.791 "small_bufsize": 8192, 00:16:11.791 "small_pool_count": 8192 00:16:11.791 } 00:16:11.791 } 00:16:11.791 ] 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "subsystem": "sock", 00:16:11.791 "config": [ 00:16:11.791 { 00:16:11.791 "method": "sock_set_default_impl", 00:16:11.791 "params": { 00:16:11.791 "impl_name": "posix" 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "sock_impl_set_options", 00:16:11.791 "params": { 00:16:11.791 "enable_ktls": false, 00:16:11.791 "enable_placement_id": 0, 00:16:11.791 "enable_quickack": false, 00:16:11.791 "enable_recv_pipe": true, 00:16:11.791 "enable_zerocopy_send_client": false, 00:16:11.791 "enable_zerocopy_send_server": true, 00:16:11.791 "impl_name": "ssl", 00:16:11.791 "recv_buf_size": 4096, 00:16:11.791 "send_buf_size": 4096, 00:16:11.791 "tls_version": 0, 00:16:11.791 "zerocopy_threshold": 0 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "sock_impl_set_options", 00:16:11.791 "params": { 00:16:11.791 "enable_ktls": false, 00:16:11.791 "enable_placement_id": 0, 00:16:11.791 "enable_quickack": false, 00:16:11.791 "enable_recv_pipe": true, 00:16:11.791 "enable_zerocopy_send_client": false, 00:16:11.791 "enable_zerocopy_send_server": true, 00:16:11.791 "impl_name": "posix", 00:16:11.791 "recv_buf_size": 2097152, 00:16:11.791 "send_buf_size": 2097152, 00:16:11.791 "tls_version": 0, 00:16:11.791 "zerocopy_threshold": 0 00:16:11.791 } 00:16:11.791 } 00:16:11.791 ] 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "subsystem": "vmd", 00:16:11.791 "config": [] 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "subsystem": "accel", 00:16:11.791 "config": [ 00:16:11.791 { 
00:16:11.791 "method": "accel_set_options", 00:16:11.791 "params": { 00:16:11.791 "buf_count": 2048, 00:16:11.791 "large_cache_size": 16, 00:16:11.791 "sequence_count": 2048, 00:16:11.791 "small_cache_size": 128, 00:16:11.791 "task_count": 2048 00:16:11.791 } 00:16:11.791 } 00:16:11.791 ] 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "subsystem": "bdev", 00:16:11.791 "config": [ 00:16:11.791 { 00:16:11.791 "method": "bdev_set_options", 00:16:11.791 "params": { 00:16:11.791 "bdev_auto_examine": true, 00:16:11.791 "bdev_io_cache_size": 256, 00:16:11.791 "bdev_io_pool_size": 65535, 00:16:11.791 "iobuf_large_cache_size": 16, 00:16:11.791 "iobuf_small_cache_size": 128 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "bdev_raid_set_options", 00:16:11.791 "params": { 00:16:11.791 "process_window_size_kb": 1024 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "bdev_iscsi_set_options", 00:16:11.791 "params": { 00:16:11.791 "timeout_sec": 30 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "bdev_nvme_set_options", 00:16:11.791 "params": { 00:16:11.791 "action_on_timeout": "none", 00:16:11.791 "allow_accel_sequence": false, 00:16:11.791 "arbitration_burst": 0, 00:16:11.791 "bdev_retry_count": 3, 00:16:11.791 "ctrlr_loss_timeout_sec": 0, 00:16:11.791 "delay_cmd_submit": true, 00:16:11.791 "dhchap_dhgroups": [ 00:16:11.791 "null", 00:16:11.791 "ffdhe2048", 00:16:11.791 "ffdhe3072", 00:16:11.791 "ffdhe4096", 00:16:11.791 "ffdhe6144", 00:16:11.791 "ffdhe8192" 00:16:11.791 ], 00:16:11.791 "dhchap_digests": [ 00:16:11.791 "sha256", 00:16:11.791 "sha384", 00:16:11.791 "sha512" 00:16:11.791 ], 00:16:11.791 "disable_auto_failback": false, 00:16:11.791 "fast_io_fail_timeout_sec": 0, 00:16:11.791 "generate_uuids": false, 00:16:11.791 "high_priority_weight": 0, 00:16:11.791 "io_path_stat": false, 00:16:11.791 "io_queue_requests": 0, 00:16:11.791 "keep_alive_timeout_ms": 10000, 00:16:11.791 "low_priority_weight": 0, 00:16:11.791 "medium_priority_weight": 0, 00:16:11.791 "nvme_adminq_poll_period_us": 10000, 00:16:11.791 "nvme_error_stat": false, 00:16:11.791 "nvme_ioq_poll_period_us": 0, 00:16:11.791 "rdma_cm_event_timeout_ms": 0, 00:16:11.791 "rdma_max_cq_size": 0, 00:16:11.791 "rdma_srq_size": 0, 00:16:11.791 "reconnect_delay_sec": 0, 00:16:11.791 "timeout_admin_us": 0, 00:16:11.791 "timeout_us": 0, 00:16:11.791 "transport_ack_timeout": 0, 00:16:11.791 "transport_retry_count": 4, 00:16:11.791 "transport_tos": 0 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "bdev_nvme_set_hotplug", 00:16:11.791 "params": { 00:16:11.791 "enable": false, 00:16:11.791 "period_us": 100000 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "bdev_malloc_create", 00:16:11.791 "params": { 00:16:11.791 "block_size": 4096, 00:16:11.791 "name": "malloc0", 00:16:11.791 "num_blocks": 8192, 00:16:11.791 "optimal_io_boundary": 0, 00:16:11.791 "physical_block_size": 4096, 00:16:11.791 "uuid": "1fbb43eb-acf4-42bf-af8c-40ac62aa433a" 00:16:11.791 } 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "method": "bdev_wait_for_examine" 00:16:11.791 } 00:16:11.791 ] 00:16:11.791 }, 00:16:11.791 { 00:16:11.791 "subsystem": "nbd", 00:16:11.791 "config": [] 00:16:11.791 }, 00:16:11.791 { 00:16:11.792 "subsystem": "scheduler", 00:16:11.792 "config": [ 00:16:11.792 { 00:16:11.792 "method": "framework_set_scheduler", 00:16:11.792 "params": { 00:16:11.792 "name": "static" 00:16:11.792 } 00:16:11.792 } 00:16:11.792 ] 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 
"subsystem": "nvmf", 00:16:11.792 "config": [ 00:16:11.792 { 00:16:11.792 "method": "nvmf_set_config", 00:16:11.792 "params": { 00:16:11.792 "admin_cmd_passthru": { 00:16:11.792 "identify_ctrlr": false 00:16:11.792 }, 00:16:11.792 "discovery_filter": "match_any" 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_set_max_subsystems", 00:16:11.792 "params": { 00:16:11.792 "max_subsystems": 1024 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_set_crdt", 00:16:11.792 "params": { 00:16:11.792 "crdt1": 0, 00:16:11.792 "crdt2": 0, 00:16:11.792 "crdt3": 0 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_create_transport", 00:16:11.792 "params": { 00:16:11.792 "abort_timeout_sec": 1, 00:16:11.792 "ack_timeout": 0, 00:16:11.792 "buf_cache_size": 4294967295, 00:16:11.792 "c2h_success": false, 00:16:11.792 "data_wr_pool_size": 0, 00:16:11.792 "dif_insert_or_strip": false, 00:16:11.792 "in_capsule_data_size": 4096, 00:16:11.792 "io_unit_size": 131072, 00:16:11.792 "max_aq_depth": 128, 00:16:11.792 "max_io_qpairs_per_ctrlr": 127, 00:16:11.792 "max_io_size": 131072, 00:16:11.792 "max_queue_depth": 128, 00:16:11.792 "num_shared_buffers": 511, 00:16:11.792 "sock_priority": 0, 00:16:11.792 "trtype": "TCP", 00:16:11.792 "zcopy": false 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_create_subsystem", 00:16:11.792 "params": { 00:16:11.792 "allow_any_host": false, 00:16:11.792 "ana_reporting": false, 00:16:11.792 "max_cntlid": 65519, 00:16:11.792 "max_namespaces": 10, 00:16:11.792 "min_cntlid": 1, 00:16:11.792 "model_number": "SPDK bdev Controller", 00:16:11.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.792 "serial_number": "SPDK00000000000001" 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_subsystem_add_host", 00:16:11.792 "params": { 00:16:11.792 "host": "nqn.2016-06.io.spdk:host1", 00:16:11.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.792 "psk": "/tmp/tmp.jvg9kNNWpN" 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_subsystem_add_ns", 00:16:11.792 "params": { 00:16:11.792 "namespace": { 00:16:11.792 "bdev_name": "malloc0", 00:16:11.792 "nguid": "1FBB43EBACF442BFAF8C40AC62AA433A", 00:16:11.792 "no_auto_visible": false, 00:16:11.792 "nsid": 1, 00:16:11.792 "uuid": "1fbb43eb-acf4-42bf-af8c-40ac62aa433a" 00:16:11.792 }, 00:16:11.792 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:11.792 } 00:16:11.792 }, 00:16:11.792 { 00:16:11.792 "method": "nvmf_subsystem_add_listener", 00:16:11.792 "params": { 00:16:11.792 "listen_address": { 00:16:11.792 "adrfam": "IPv4", 00:16:11.792 "traddr": "10.0.0.2", 00:16:11.792 "trsvcid": "4420", 00:16:11.792 "trtype": "TCP" 00:16:11.792 }, 00:16:11.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.792 "secure_channel": true 00:16:11.792 } 00:16:11.792 } 00:16:11.792 ] 00:16:11.792 } 00:16:11.792 ] 00:16:11.792 }' 00:16:11.792 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:12.361 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:12.361 "subsystems": [ 00:16:12.361 { 00:16:12.361 "subsystem": "keyring", 00:16:12.361 "config": [] 00:16:12.361 }, 00:16:12.361 { 00:16:12.361 "subsystem": "iobuf", 00:16:12.361 "config": [ 00:16:12.361 { 00:16:12.361 "method": "iobuf_set_options", 00:16:12.361 "params": { 00:16:12.362 "large_bufsize": 135168, 00:16:12.362 "large_pool_count": 1024, 00:16:12.362 "small_bufsize": 
8192, 00:16:12.362 "small_pool_count": 8192 00:16:12.362 } 00:16:12.362 } 00:16:12.362 ] 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "subsystem": "sock", 00:16:12.362 "config": [ 00:16:12.362 { 00:16:12.362 "method": "sock_set_default_impl", 00:16:12.362 "params": { 00:16:12.362 "impl_name": "posix" 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "sock_impl_set_options", 00:16:12.362 "params": { 00:16:12.362 "enable_ktls": false, 00:16:12.362 "enable_placement_id": 0, 00:16:12.362 "enable_quickack": false, 00:16:12.362 "enable_recv_pipe": true, 00:16:12.362 "enable_zerocopy_send_client": false, 00:16:12.362 "enable_zerocopy_send_server": true, 00:16:12.362 "impl_name": "ssl", 00:16:12.362 "recv_buf_size": 4096, 00:16:12.362 "send_buf_size": 4096, 00:16:12.362 "tls_version": 0, 00:16:12.362 "zerocopy_threshold": 0 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "sock_impl_set_options", 00:16:12.362 "params": { 00:16:12.362 "enable_ktls": false, 00:16:12.362 "enable_placement_id": 0, 00:16:12.362 "enable_quickack": false, 00:16:12.362 "enable_recv_pipe": true, 00:16:12.362 "enable_zerocopy_send_client": false, 00:16:12.362 "enable_zerocopy_send_server": true, 00:16:12.362 "impl_name": "posix", 00:16:12.362 "recv_buf_size": 2097152, 00:16:12.362 "send_buf_size": 2097152, 00:16:12.362 "tls_version": 0, 00:16:12.362 "zerocopy_threshold": 0 00:16:12.362 } 00:16:12.362 } 00:16:12.362 ] 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "subsystem": "vmd", 00:16:12.362 "config": [] 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "subsystem": "accel", 00:16:12.362 "config": [ 00:16:12.362 { 00:16:12.362 "method": "accel_set_options", 00:16:12.362 "params": { 00:16:12.362 "buf_count": 2048, 00:16:12.362 "large_cache_size": 16, 00:16:12.362 "sequence_count": 2048, 00:16:12.362 "small_cache_size": 128, 00:16:12.362 "task_count": 2048 00:16:12.362 } 00:16:12.362 } 00:16:12.362 ] 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "subsystem": "bdev", 00:16:12.362 "config": [ 00:16:12.362 { 00:16:12.362 "method": "bdev_set_options", 00:16:12.362 "params": { 00:16:12.362 "bdev_auto_examine": true, 00:16:12.362 "bdev_io_cache_size": 256, 00:16:12.362 "bdev_io_pool_size": 65535, 00:16:12.362 "iobuf_large_cache_size": 16, 00:16:12.362 "iobuf_small_cache_size": 128 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "bdev_raid_set_options", 00:16:12.362 "params": { 00:16:12.362 "process_window_size_kb": 1024 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "bdev_iscsi_set_options", 00:16:12.362 "params": { 00:16:12.362 "timeout_sec": 30 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "bdev_nvme_set_options", 00:16:12.362 "params": { 00:16:12.362 "action_on_timeout": "none", 00:16:12.362 "allow_accel_sequence": false, 00:16:12.362 "arbitration_burst": 0, 00:16:12.362 "bdev_retry_count": 3, 00:16:12.362 "ctrlr_loss_timeout_sec": 0, 00:16:12.362 "delay_cmd_submit": true, 00:16:12.362 "dhchap_dhgroups": [ 00:16:12.362 "null", 00:16:12.362 "ffdhe2048", 00:16:12.362 "ffdhe3072", 00:16:12.362 "ffdhe4096", 00:16:12.362 "ffdhe6144", 00:16:12.362 "ffdhe8192" 00:16:12.362 ], 00:16:12.362 "dhchap_digests": [ 00:16:12.362 "sha256", 00:16:12.362 "sha384", 00:16:12.362 "sha512" 00:16:12.362 ], 00:16:12.362 "disable_auto_failback": false, 00:16:12.362 "fast_io_fail_timeout_sec": 0, 00:16:12.362 "generate_uuids": false, 00:16:12.362 "high_priority_weight": 0, 00:16:12.362 "io_path_stat": false, 00:16:12.362 "io_queue_requests": 512, 00:16:12.362 
"keep_alive_timeout_ms": 10000, 00:16:12.362 "low_priority_weight": 0, 00:16:12.362 "medium_priority_weight": 0, 00:16:12.362 "nvme_adminq_poll_period_us": 10000, 00:16:12.362 "nvme_error_stat": false, 00:16:12.362 "nvme_ioq_poll_period_us": 0, 00:16:12.362 "rdma_cm_event_timeout_ms": 0, 00:16:12.362 "rdma_max_cq_size": 0, 00:16:12.362 "rdma_srq_size": 0, 00:16:12.362 "reconnect_delay_sec": 0, 00:16:12.362 "timeout_admin_us": 0, 00:16:12.362 "timeout_us": 0, 00:16:12.362 "transport_ack_timeout": 0, 00:16:12.362 "transport_retry_count": 4, 00:16:12.362 "transport_tos": 0 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "bdev_nvme_attach_controller", 00:16:12.362 "params": { 00:16:12.362 "adrfam": "IPv4", 00:16:12.362 "ctrlr_loss_timeout_sec": 0, 00:16:12.362 "ddgst": false, 00:16:12.362 "fast_io_fail_timeout_sec": 0, 00:16:12.362 "hdgst": false, 00:16:12.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:12.362 "name": "TLSTEST", 00:16:12.362 "prchk_guard": false, 00:16:12.362 "prchk_reftag": false, 00:16:12.362 "psk": "/tmp/tmp.jvg9kNNWpN", 00:16:12.362 "reconnect_delay_sec": 0, 00:16:12.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.362 "traddr": "10.0.0.2", 00:16:12.362 "trsvcid": "4420", 00:16:12.362 "trtype": "TCP" 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "bdev_nvme_set_hotplug", 00:16:12.362 "params": { 00:16:12.362 "enable": false, 00:16:12.362 "period_us": 100000 00:16:12.362 } 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "method": "bdev_wait_for_examine" 00:16:12.362 } 00:16:12.362 ] 00:16:12.362 }, 00:16:12.362 { 00:16:12.362 "subsystem": "nbd", 00:16:12.362 "config": [] 00:16:12.362 } 00:16:12.362 ] 00:16:12.362 }' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 78471 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78471 ']' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78471 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78471 00:16:12.362 killing process with pid 78471 00:16:12.362 Received shutdown signal, test time was about 10.000000 seconds 00:16:12.362 00:16:12.362 Latency(us) 00:16:12.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.362 =================================================================================================================== 00:16:12.362 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78471' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78471 00:16:12.362 [2024-05-15 08:56:28.326906] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78471 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 78364 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@946 -- # '[' -z 78364 ']' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78364 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78364 00:16:12.362 killing process with pid 78364 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:12.362 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:12.363 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78364' 00:16:12.363 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78364 00:16:12.363 [2024-05-15 08:56:28.543958] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:12.363 [2024-05-15 08:56:28.544000] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:12.363 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78364 00:16:12.622 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:12.622 08:56:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:12.622 08:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:12.622 "subsystems": [ 00:16:12.622 { 00:16:12.622 "subsystem": "keyring", 00:16:12.622 "config": [] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "iobuf", 00:16:12.622 "config": [ 00:16:12.622 { 00:16:12.622 "method": "iobuf_set_options", 00:16:12.622 "params": { 00:16:12.622 "large_bufsize": 135168, 00:16:12.622 "large_pool_count": 1024, 00:16:12.622 "small_bufsize": 8192, 00:16:12.622 "small_pool_count": 8192 00:16:12.622 } 00:16:12.622 } 00:16:12.622 ] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "sock", 00:16:12.622 "config": [ 00:16:12.622 { 00:16:12.622 "method": "sock_set_default_impl", 00:16:12.622 "params": { 00:16:12.622 "impl_name": "posix" 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "sock_impl_set_options", 00:16:12.622 "params": { 00:16:12.622 "enable_ktls": false, 00:16:12.622 "enable_placement_id": 0, 00:16:12.622 "enable_quickack": false, 00:16:12.622 "enable_recv_pipe": true, 00:16:12.622 "enable_zerocopy_send_client": false, 00:16:12.622 "enable_zerocopy_send_server": true, 00:16:12.622 "impl_name": "ssl", 00:16:12.622 "recv_buf_size": 4096, 00:16:12.622 "send_buf_size": 4096, 00:16:12.622 "tls_version": 0, 00:16:12.622 "zerocopy_threshold": 0 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "sock_impl_set_options", 00:16:12.622 "params": { 00:16:12.622 "enable_ktls": false, 00:16:12.622 "enable_placement_id": 0, 00:16:12.622 "enable_quickack": false, 00:16:12.622 "enable_recv_pipe": true, 00:16:12.622 "enable_zerocopy_send_client": false, 00:16:12.622 "enable_zerocopy_send_server": true, 00:16:12.622 "impl_name": "posix", 00:16:12.622 "recv_buf_size": 2097152, 00:16:12.622 "send_buf_size": 2097152, 00:16:12.622 "tls_version": 0, 00:16:12.622 "zerocopy_threshold": 0 00:16:12.622 } 00:16:12.622 } 00:16:12.622 ] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "vmd", 
00:16:12.622 "config": [] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "accel", 00:16:12.622 "config": [ 00:16:12.622 { 00:16:12.622 "method": "accel_set_options", 00:16:12.622 "params": { 00:16:12.622 "buf_count": 2048, 00:16:12.622 "large_cache_size": 16, 00:16:12.622 "sequence_count": 2048, 00:16:12.622 "small_cache_size": 128, 00:16:12.622 "task_count": 2048 00:16:12.622 } 00:16:12.622 } 00:16:12.622 ] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "bdev", 00:16:12.622 "config": [ 00:16:12.622 { 00:16:12.622 "method": "bdev_set_options", 00:16:12.622 "params": { 00:16:12.622 "bdev_auto_examine": true, 00:16:12.622 "bdev_io_cache_size": 256, 00:16:12.622 "bdev_io_pool_size": 65535, 00:16:12.622 "iobuf_large_cache_size": 16, 00:16:12.622 "iobuf_small_cache_size": 128 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "bdev_raid_set_options", 00:16:12.622 "params": { 00:16:12.622 "process_window_size_kb": 1024 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "bdev_iscsi_set_options", 00:16:12.622 "params": { 00:16:12.622 "timeout_sec": 30 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "bdev_nvme_set_options", 00:16:12.622 "params": { 00:16:12.622 "action_on_timeout": "none", 00:16:12.622 "allow_accel_sequence": false, 00:16:12.622 "arbitration_burst": 0, 00:16:12.622 "bdev_retry_count": 3, 00:16:12.622 "ctrlr_loss_timeout_sec": 0, 00:16:12.622 "delay_cmd_submit": true, 00:16:12.622 "dhchap_dhgroups": [ 00:16:12.622 "null", 00:16:12.622 "ffdhe2048", 00:16:12.622 "ffdhe3072", 00:16:12.622 "ffdhe4096", 00:16:12.622 "ffdhe6144", 00:16:12.622 "ffdhe8192" 00:16:12.622 ], 00:16:12.622 "dhchap_digests": [ 00:16:12.622 "sha256", 00:16:12.622 "sha384", 00:16:12.622 "sha512" 00:16:12.622 ], 00:16:12.622 "disable_auto_failback": false, 00:16:12.622 "fast_io_fail_timeout_sec": 0, 00:16:12.622 "generate_uuids": false, 00:16:12.622 "high_priority_weight": 0, 00:16:12.622 "io_path_stat": false, 00:16:12.622 "io_queue_requests": 0, 00:16:12.622 "keep_alive_timeout_ms": 10000, 00:16:12.622 "low_priority_weight": 0, 00:16:12.622 "medium_priority_weight": 0, 00:16:12.622 "nvme_adminq_poll_period_us": 10000, 00:16:12.622 "nvme_error_stat": false, 00:16:12.622 "nvme_ioq_poll_period_us": 0, 00:16:12.622 "rdma_cm_event_timeout_ms": 0, 00:16:12.622 "rdma_max_cq_size": 0, 00:16:12.622 "rdma_srq_size": 0, 00:16:12.622 "reconnect_delay_sec": 0, 00:16:12.622 "timeout_admin_us": 0, 00:16:12.622 "timeout_us": 0, 00:16:12.622 "transport_ack_timeout": 0, 00:16:12.622 "transport_retry_count": 4, 00:16:12.622 "transport_tos": 0 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "bdev_nvme_set_hotplug", 00:16:12.622 "params": { 00:16:12.622 "enable": false, 00:16:12.622 "period_us": 100000 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "bdev_malloc_create", 00:16:12.622 "params": { 00:16:12.622 "block_size": 4096, 00:16:12.622 "name": "malloc0", 00:16:12.622 "num_blocks": 8192, 00:16:12.622 "optimal_io_boundary": 0, 00:16:12.622 "physical_block_size": 4096, 00:16:12.622 "uuid": "1fbb43eb-acf4-42bf-af8c-40ac62aa433a" 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "bdev_wait_for_examine" 00:16:12.622 } 00:16:12.622 ] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "nbd", 00:16:12.622 "config": [] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "scheduler", 00:16:12.622 "config": [ 00:16:12.622 { 00:16:12.622 "method": "framework_set_scheduler", 00:16:12.622 "params": 
{ 00:16:12.622 "name": "static" 00:16:12.622 } 00:16:12.622 } 00:16:12.622 ] 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "subsystem": "nvmf", 00:16:12.622 "config": [ 00:16:12.622 { 00:16:12.622 "method": "nvmf_set_config", 00:16:12.622 "params": { 00:16:12.622 "admin_cmd_passthru": { 00:16:12.622 "identify_ctrlr": false 00:16:12.622 }, 00:16:12.622 "discovery_filter": "match_any" 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "nvmf_set_max_subsystems", 00:16:12.622 "params": { 00:16:12.622 "max_subsystems": 1024 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "nvmf_set_crdt", 00:16:12.622 "params": { 00:16:12.622 "crdt1": 0, 00:16:12.622 "crdt2": 0, 00:16:12.622 "crdt3": 0 00:16:12.622 } 00:16:12.622 }, 00:16:12.622 { 00:16:12.622 "method": "nvmf_create_transport", 00:16:12.622 "params": { 00:16:12.622 "abort_timeout_sec": 1, 00:16:12.622 "ack_timeout": 0, 00:16:12.622 "buf_cache_size": 4294967295, 00:16:12.622 "c2h_success": false, 00:16:12.622 "data_wr_pool_size": 0, 00:16:12.622 "dif_insert_or_strip": false, 00:16:12.623 "in_capsule_data_size": 4096, 00:16:12.623 "io_unit_size": 131072, 00:16:12.623 "max_aq_depth": 128, 00:16:12.623 "max_io_qpairs_per_ctrlr": 127, 00:16:12.623 "max_io_size": 131072, 00:16:12.623 "max_queue_depth": 128, 00:16:12.623 "num_shared_buffers": 511, 00:16:12.623 "sock_priority": 0, 00:16:12.623 "trtype": "TCP", 00:16:12.623 "zcopy": false 00:16:12.623 } 00:16:12.623 }, 00:16:12.623 { 00:16:12.623 "method": "nvmf_create_subsystem", 00:16:12.623 "params": { 00:16:12.623 "allow_any_host": false, 00:16:12.623 "ana_reporting": false, 00:16:12.623 "max_cntlid": 65519, 00:16:12.623 "max_namespaces": 10, 00:16:12.623 "min_cntlid": 1, 00:16:12.623 "model_number": "SPDK bdev Controller", 00:16:12.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.623 "serial_number": "SPDK00000000000001" 00:16:12.623 } 00:16:12.623 }, 00:16:12.623 { 00:16:12.623 "method": "nvmf_subsystem_add_host", 00:16:12.623 "params": { 00:16:12.623 "host": "nqn.2016-06.io.spdk:host1", 00:16:12.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.623 "psk": "/tmp/tmp.jvg9kNNWpN" 00:16:12.623 } 00:16:12.623 }, 00:16:12.623 { 00:16:12.623 "method": "nvmf_subsystem_add_ns", 00:16:12.623 "params": { 00:16:12.623 "namespace": { 00:16:12.623 "bdev_name": "malloc0", 00:16:12.623 "nguid": "1FBB43EBACF442BFAF8C40AC62AA433A", 00:16:12.623 "no_auto_visible": false, 00:16:12.623 "nsid": 1, 00:16:12.623 "uuid": "1fbb43eb-acf4-42bf-af8c-40ac62aa433a" 00:16:12.623 }, 00:16:12.623 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:12.623 } 00:16:12.623 }, 00:16:12.623 { 00:16:12.623 "method": "nvmf_subsystem_add_listener", 00:16:12.623 "params": { 00:16:12.623 "listen_address": { 00:16:12.623 "adrfam": "IPv4", 00:16:12.623 "traddr": "10.0.0.2", 00:16:12.623 "trsvcid": "4420", 00:16:12.623 "trtype": "TCP" 00:16:12.623 }, 00:16:12.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.623 "secure_channel": true 00:16:12.623 } 00:16:12.623 } 00:16:12.623 ] 00:16:12.623 } 00:16:12.623 ] 00:16:12.623 }' 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78545 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 
-- # waitforlisten 78545 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78545 ']' 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:12.623 08:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.623 [2024-05-15 08:56:28.790481] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:12.623 [2024-05-15 08:56:28.790590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.881 [2024-05-15 08:56:28.926312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.881 [2024-05-15 08:56:28.985387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.881 [2024-05-15 08:56:28.985446] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.881 [2024-05-15 08:56:28.985458] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.881 [2024-05-15 08:56:28.985467] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.881 [2024-05-15 08:56:28.985475] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
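A note on the nvmf_tgt relaunch a few lines above: target/tls.sh captures the live target configuration with save_config and feeds it straight back into a fresh target through /dev/fd/62. A condensed sketch of that round trip, with paths relative to the SPDK repo and an ordinary file name standing in for the file-descriptor trick:

    scripts/rpc.py save_config > tgt_config.json      # dump the running target's configuration
    build/bin/nvmf_tgt -m 0x2 -c tgt_config.json      # start a new target from that JSON
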
00:16:12.881 [2024-05-15 08:56:28.985583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.140 [2024-05-15 08:56:29.172698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.140 [2024-05-15 08:56:29.188613] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:13.140 [2024-05-15 08:56:29.204602] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:13.140 [2024-05-15 08:56:29.204686] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:13.140 [2024-05-15 08:56:29.204861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=78595 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 78595 /var/tmp/bdevperf.sock 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78595 ']' 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.707 08:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:13.707 "subsystems": [ 00:16:13.707 { 00:16:13.707 "subsystem": "keyring", 00:16:13.707 "config": [] 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "subsystem": "iobuf", 00:16:13.707 "config": [ 00:16:13.707 { 00:16:13.707 "method": "iobuf_set_options", 00:16:13.707 "params": { 00:16:13.707 "large_bufsize": 135168, 00:16:13.707 "large_pool_count": 1024, 00:16:13.707 "small_bufsize": 8192, 00:16:13.707 "small_pool_count": 8192 00:16:13.707 } 00:16:13.707 } 00:16:13.707 ] 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "subsystem": "sock", 00:16:13.707 "config": [ 00:16:13.707 { 00:16:13.707 "method": "sock_set_default_impl", 00:16:13.707 "params": { 00:16:13.707 "impl_name": "posix" 00:16:13.707 } 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "method": "sock_impl_set_options", 00:16:13.707 "params": { 00:16:13.707 "enable_ktls": false, 00:16:13.707 "enable_placement_id": 0, 00:16:13.707 "enable_quickack": false, 00:16:13.707 "enable_recv_pipe": true, 00:16:13.707 "enable_zerocopy_send_client": false, 00:16:13.707 "enable_zerocopy_send_server": true, 00:16:13.707 "impl_name": "ssl", 00:16:13.707 "recv_buf_size": 4096, 00:16:13.707 "send_buf_size": 4096, 00:16:13.707 "tls_version": 0, 00:16:13.707 "zerocopy_threshold": 0 00:16:13.707 } 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "method": "sock_impl_set_options", 00:16:13.707 "params": { 00:16:13.707 "enable_ktls": false, 00:16:13.707 "enable_placement_id": 0, 00:16:13.707 "enable_quickack": false, 00:16:13.707 "enable_recv_pipe": true, 00:16:13.707 
"enable_zerocopy_send_client": false, 00:16:13.707 "enable_zerocopy_send_server": true, 00:16:13.707 "impl_name": "posix", 00:16:13.707 "recv_buf_size": 2097152, 00:16:13.707 "send_buf_size": 2097152, 00:16:13.707 "tls_version": 0, 00:16:13.707 "zerocopy_threshold": 0 00:16:13.707 } 00:16:13.707 } 00:16:13.707 ] 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "subsystem": "vmd", 00:16:13.707 "config": [] 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "subsystem": "accel", 00:16:13.707 "config": [ 00:16:13.707 { 00:16:13.707 "method": "accel_set_options", 00:16:13.707 "params": { 00:16:13.707 "buf_count": 2048, 00:16:13.707 "large_cache_size": 16, 00:16:13.707 "sequence_count": 2048, 00:16:13.707 "small_cache_size": 128, 00:16:13.707 "task_count": 2048 00:16:13.707 } 00:16:13.707 } 00:16:13.707 ] 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "subsystem": "bdev", 00:16:13.707 "config": [ 00:16:13.707 { 00:16:13.707 "method": "bdev_set_options", 00:16:13.707 "params": { 00:16:13.707 "bdev_auto_examine": true, 00:16:13.707 "bdev_io_cache_size": 256, 00:16:13.707 "bdev_io_pool_size": 65535, 00:16:13.707 "iobuf_large_cache_size": 16, 00:16:13.707 "iobuf_small_cache_size": 128 00:16:13.707 } 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "method": "bdev_raid_set_options", 00:16:13.707 "params": { 00:16:13.707 "process_window_size_kb": 1024 00:16:13.707 } 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "method": "bdev_iscsi_set_options", 00:16:13.707 "params": { 00:16:13.707 "timeout_sec": 30 00:16:13.707 } 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "method": "bdev_nvme_set_options", 00:16:13.707 "params": { 00:16:13.707 "action_on_timeout": "none", 00:16:13.707 "allow_accel_sequence": false, 00:16:13.707 "arbitration_burst": 0, 00:16:13.707 "bdev_retry_count": 3, 00:16:13.707 "ctrlr_loss_timeout_sec": 0, 00:16:13.707 "delay_cmd_submit": true, 00:16:13.707 "dhchap_dhgroups": [ 00:16:13.707 "null", 00:16:13.707 "ffdhe2048", 00:16:13.707 "ffdhe3072", 00:16:13.707 "ffdhe4096", 00:16:13.707 "ffdhe6144", 00:16:13.707 "ffdhe8192" 00:16:13.707 ], 00:16:13.707 "dhchap_digests": [ 00:16:13.707 "sha256", 00:16:13.707 "sha384", 00:16:13.707 "sha512" 00:16:13.707 ], 00:16:13.707 "disable_auto_failback": false, 00:16:13.707 "fast_io_fail_timeout_sec": 0, 00:16:13.707 "generate_uuids": false, 00:16:13.707 "high_priority_weight": 0, 00:16:13.707 "io_path_stat": false, 00:16:13.707 "io_queue_requests": 512, 00:16:13.707 "keep_alive_timeout_ms": 10000, 00:16:13.707 "low_priority_weight": 0, 00:16:13.707 "medium_priority_weight": 0, 00:16:13.707 "nvme_adminq_poll_period_us": 10000, 00:16:13.707 "nvme_error_stat": false, 00:16:13.707 "nvme_ioq_poll_period_us": 0, 00:16:13.707 "rdma_cm_event_timeout_ms": 0, 00:16:13.707 "rdma_max_cq_size": 0, 00:16:13.707 "rdma_srq_size": 0, 00:16:13.707 "reconnect_delay_sec": 0, 00:16:13.707 "timeout_admin_us": 0, 00:16:13.707 "timeout_us": 0, 00:16:13.707 "transport_ack_timeout": 0, 00:16:13.707 "transport_retry_count": 4, 00:16:13.707 "transport_tos": 0 00:16:13.707 } 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "method": "bdev_nvme_attach_controller", 00:16:13.707 "params": { 00:16:13.707 "adrfam": "IPv4", 00:16:13.707 "ctrlr_loss_timeout_sec": 0, 00:16:13.707 "ddgst": false, 00:16:13.707 "fast_io_fail_timeout_sec": 0, 00:16:13.707 "hdgst": false, 00:16:13.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.707 "name": "TLSTEST", 00:16:13.707 "prchk_guard": false, 00:16:13.708 "prchk_reftag": false, 00:16:13.708 "psk": "/tmp/tmp.jvg9kNNWpN", 00:16:13.708 "reconnect_delay_sec": 0, 00:16:13.708 
"subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.708 "traddr": "10.0.0.2", 00:16:13.708 "trsvcid": "4420", 00:16:13.708 "trtype": "TCP" 00:16:13.708 } 00:16:13.708 }, 00:16:13.708 { 00:16:13.708 "method": "bdev_nvme_set_hotplug", 00:16:13.708 "params": { 00:16:13.708 "enable": false, 00:16:13.708 "period_us": 100000 00:16:13.708 } 00:16:13.708 }, 00:16:13.708 { 00:16:13.708 "method": "bdev_wait_for_examine" 00:16:13.708 } 00:16:13.708 ] 00:16:13.708 }, 00:16:13.708 { 00:16:13.708 "subsystem": "nbd", 00:16:13.708 "config": [] 00:16:13.708 } 00:16:13.708 ] 00:16:13.708 }' 00:16:13.708 08:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:13.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.708 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.708 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.708 08:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.708 [2024-05-15 08:56:29.929372] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:13.708 [2024-05-15 08:56:29.929975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78595 ] 00:16:13.967 [2024-05-15 08:56:30.069529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.967 [2024-05-15 08:56:30.138987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.226 [2024-05-15 08:56:30.270406] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:14.226 [2024-05-15 08:56:30.270541] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:14.794 08:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:14.794 08:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:14.794 08:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.053 Running I/O for 10 seconds... 
00:16:25.029 00:16:25.029 Latency(us) 00:16:25.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.029 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:25.029 Verification LBA range: start 0x0 length 0x2000 00:16:25.029 TLSTESTn1 : 10.02 3611.40 14.11 0.00 0.00 35374.89 7179.17 35746.91 00:16:25.029 =================================================================================================================== 00:16:25.029 Total : 3611.40 14.11 0.00 0.00 35374.89 7179.17 35746.91 00:16:25.029 0 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 78595 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78595 ']' 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78595 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78595 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:25.029 killing process with pid 78595 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78595' 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78595 00:16:25.029 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.029 00:16:25.029 Latency(us) 00:16:25.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.029 =================================================================================================================== 00:16:25.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.029 [2024-05-15 08:56:41.124546] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:25.029 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78595 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 78545 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78545 ']' 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78545 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78545 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:25.287 killing process with pid 78545 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78545' 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78545 00:16:25.287 [2024-05-15 08:56:41.337223] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of 
trtype' scheduled for removal in v24.09 hit 1 times 00:16:25.287 [2024-05-15 08:56:41.337266] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:25.287 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78545 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78740 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78740 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78740 ']' 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.545 08:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.545 [2024-05-15 08:56:41.583700] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:25.545 [2024-05-15 08:56:41.583782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.545 [2024-05-15 08:56:41.718657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.803 [2024-05-15 08:56:41.786190] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.803 [2024-05-15 08:56:41.786247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.803 [2024-05-15 08:56:41.786260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.803 [2024-05-15 08:56:41.786270] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.803 [2024-05-15 08:56:41.786279] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
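The app_setup_trace notices printed at each target start (including the block just above) describe how to inspect the target while it runs. Followed literally, and assuming the spdk_trace binary from the standard SPDK build layout:

    build/bin/spdk_trace -s nvmf -i 0     # print a snapshot of trace events from the live target
    cp /dev/shm/nvmf_trace.0 .            # or keep the raw shared-memory trace for offline analysis
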
00:16:25.803 [2024-05-15 08:56:41.786314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.370 08:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.370 08:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:26.370 08:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.370 08:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.371 08:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:26.371 08:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.371 08:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.jvg9kNNWpN 00:16:26.371 08:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jvg9kNNWpN 00:16:26.371 08:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:26.939 [2024-05-15 08:56:42.903899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.939 08:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:27.197 08:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:27.456 [2024-05-15 08:56:43.435962] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:27.456 [2024-05-15 08:56:43.436061] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.456 [2024-05-15 08:56:43.436254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.456 08:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:27.456 malloc0 00:16:27.714 08:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:27.973 08:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN 00:16:28.232 [2024-05-15 08:56:44.222536] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=78843 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 78843 /var/tmp/bdevperf.sock 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78843 ']' 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.232 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.232 08:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.232 [2024-05-15 08:56:44.293719] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:28.232 [2024-05-15 08:56:44.293814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78843 ] 00:16:28.232 [2024-05-15 08:56:44.431283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.491 [2024-05-15 08:56:44.502534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.057 08:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:29.057 08:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:29.057 08:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jvg9kNNWpN 00:16:29.344 08:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:29.602 [2024-05-15 08:56:45.701744] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.602 nvme0n1 00:16:29.602 08:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:29.861 Running I/O for 1 seconds... 
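Taken together, this pass exercises both ends of the keyring-based TLS path: the target (pid 78740) publishes the subsystem on a TLS listener and registers the host's PSK, while the bdevperf initiator (pid 78843) loads the same key file into its keyring before attaching. Every command and value below is taken verbatim from the log above; paths are relative to the SPDK repo:

    # target side, on the default /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jvg9kNNWpN

    # initiator side, against the bdevperf RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jvg9kNNWpN
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The -k on nvmf_subsystem_add_listener is what marks the port as a secure (TLS) channel, and --psk key0 on the attach refers to the keyring entry added just before it rather than to a raw key file, which is what triggered the spdk_nvme_ctrlr_opts.psk deprecation warning in the earlier pass.
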
00:16:30.796 00:16:30.796 Latency(us) 00:16:30.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:30.796 Verification LBA range: start 0x0 length 0x2000 00:16:30.796 nvme0n1 : 1.02 3918.90 15.31 0.00 0.00 32281.21 7298.33 24546.21 00:16:30.796 =================================================================================================================== 00:16:30.796 Total : 3918.90 15.31 0.00 0.00 32281.21 7298.33 24546.21 00:16:30.796 0 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 78843 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78843 ']' 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78843 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78843 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:30.796 killing process with pid 78843 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78843' 00:16:30.796 Received shutdown signal, test time was about 1.000000 seconds 00:16:30.796 00:16:30.796 Latency(us) 00:16:30.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.796 =================================================================================================================== 00:16:30.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78843 00:16:30.796 08:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78843 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 78740 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78740 ']' 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78740 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78740 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:31.056 killing process with pid 78740 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78740' 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78740 00:16:31.056 [2024-05-15 08:56:47.199493] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:31.056 [2024-05-15 08:56:47.199538] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:31.056 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
78740 00:16:31.314 08:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:16:31.314 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:31.314 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:31.314 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.314 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78918 00:16:31.314 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78918 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78918 ']' 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.315 08:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.315 [2024-05-15 08:56:47.459764] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:31.315 [2024-05-15 08:56:47.459869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.573 [2024-05-15 08:56:47.595337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.573 [2024-05-15 08:56:47.653153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.573 [2024-05-15 08:56:47.653209] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.573 [2024-05-15 08:56:47.653221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.573 [2024-05-15 08:56:47.653230] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.573 [2024-05-15 08:56:47.653237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
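The waitforlisten helper invoked above simply blocks until the freshly started target answers on its RPC socket. A minimal stand-in for it, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as a cheap liveness query:

    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
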
00:16:31.573 [2024-05-15 08:56:47.653267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.509 [2024-05-15 08:56:48.453451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.509 malloc0 00:16:32.509 [2024-05-15 08:56:48.480136] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:32.509 [2024-05-15 08:56:48.480623] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:32.509 [2024-05-15 08:56:48.480899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=78968 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 78968 /var/tmp/bdevperf.sock 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78968 ']' 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:32.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:32.509 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.509 [2024-05-15 08:56:48.568428] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:16:32.509 [2024-05-15 08:56:48.568540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78968 ] 00:16:32.509 [2024-05-15 08:56:48.707939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.767 [2024-05-15 08:56:48.780583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.767 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:32.767 08:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:32.767 08:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jvg9kNNWpN 00:16:33.026 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:33.285 [2024-05-15 08:56:49.430476] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.285 nvme0n1 00:16:33.543 08:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:33.543 Running I/O for 1 seconds... 00:16:34.477 00:16:34.477 Latency(us) 00:16:34.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.477 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:34.477 Verification LBA range: start 0x0 length 0x2000 00:16:34.477 nvme0n1 : 1.02 3807.86 14.87 0.00 0.00 33258.48 6851.49 27763.43 00:16:34.477 =================================================================================================================== 00:16:34.477 Total : 3807.86 14.87 0.00 0.00 33258.48 6851.49 27763.43 00:16:34.477 0 00:16:34.477 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:16:34.477 08:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.477 08:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.737 08:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.737 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:16:34.737 "subsystems": [ 00:16:34.737 { 00:16:34.737 "subsystem": "keyring", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "keyring_file_add_key", 00:16:34.737 "params": { 00:16:34.737 "name": "key0", 00:16:34.737 "path": "/tmp/tmp.jvg9kNNWpN" 00:16:34.737 } 00:16:34.737 } 00:16:34.737 ] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "iobuf", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "iobuf_set_options", 00:16:34.737 "params": { 00:16:34.737 "large_bufsize": 135168, 00:16:34.737 "large_pool_count": 1024, 00:16:34.737 "small_bufsize": 8192, 00:16:34.737 "small_pool_count": 8192 00:16:34.737 } 00:16:34.737 } 00:16:34.737 ] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "sock", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "sock_set_default_impl", 00:16:34.737 "params": { 00:16:34.737 "impl_name": "posix" 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "sock_impl_set_options", 00:16:34.737 "params": { 00:16:34.737 "enable_ktls": false, 
00:16:34.737 "enable_placement_id": 0, 00:16:34.737 "enable_quickack": false, 00:16:34.737 "enable_recv_pipe": true, 00:16:34.737 "enable_zerocopy_send_client": false, 00:16:34.737 "enable_zerocopy_send_server": true, 00:16:34.737 "impl_name": "ssl", 00:16:34.737 "recv_buf_size": 4096, 00:16:34.737 "send_buf_size": 4096, 00:16:34.737 "tls_version": 0, 00:16:34.737 "zerocopy_threshold": 0 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "sock_impl_set_options", 00:16:34.737 "params": { 00:16:34.737 "enable_ktls": false, 00:16:34.737 "enable_placement_id": 0, 00:16:34.737 "enable_quickack": false, 00:16:34.737 "enable_recv_pipe": true, 00:16:34.737 "enable_zerocopy_send_client": false, 00:16:34.737 "enable_zerocopy_send_server": true, 00:16:34.737 "impl_name": "posix", 00:16:34.737 "recv_buf_size": 2097152, 00:16:34.737 "send_buf_size": 2097152, 00:16:34.737 "tls_version": 0, 00:16:34.737 "zerocopy_threshold": 0 00:16:34.737 } 00:16:34.737 } 00:16:34.737 ] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "vmd", 00:16:34.737 "config": [] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "accel", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "accel_set_options", 00:16:34.737 "params": { 00:16:34.737 "buf_count": 2048, 00:16:34.737 "large_cache_size": 16, 00:16:34.737 "sequence_count": 2048, 00:16:34.737 "small_cache_size": 128, 00:16:34.737 "task_count": 2048 00:16:34.737 } 00:16:34.737 } 00:16:34.737 ] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "bdev", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "bdev_set_options", 00:16:34.737 "params": { 00:16:34.737 "bdev_auto_examine": true, 00:16:34.737 "bdev_io_cache_size": 256, 00:16:34.737 "bdev_io_pool_size": 65535, 00:16:34.737 "iobuf_large_cache_size": 16, 00:16:34.737 "iobuf_small_cache_size": 128 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "bdev_raid_set_options", 00:16:34.737 "params": { 00:16:34.737 "process_window_size_kb": 1024 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "bdev_iscsi_set_options", 00:16:34.737 "params": { 00:16:34.737 "timeout_sec": 30 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "bdev_nvme_set_options", 00:16:34.737 "params": { 00:16:34.737 "action_on_timeout": "none", 00:16:34.737 "allow_accel_sequence": false, 00:16:34.737 "arbitration_burst": 0, 00:16:34.737 "bdev_retry_count": 3, 00:16:34.737 "ctrlr_loss_timeout_sec": 0, 00:16:34.737 "delay_cmd_submit": true, 00:16:34.737 "dhchap_dhgroups": [ 00:16:34.737 "null", 00:16:34.737 "ffdhe2048", 00:16:34.737 "ffdhe3072", 00:16:34.737 "ffdhe4096", 00:16:34.737 "ffdhe6144", 00:16:34.737 "ffdhe8192" 00:16:34.737 ], 00:16:34.737 "dhchap_digests": [ 00:16:34.737 "sha256", 00:16:34.737 "sha384", 00:16:34.737 "sha512" 00:16:34.737 ], 00:16:34.737 "disable_auto_failback": false, 00:16:34.737 "fast_io_fail_timeout_sec": 0, 00:16:34.737 "generate_uuids": false, 00:16:34.737 "high_priority_weight": 0, 00:16:34.737 "io_path_stat": false, 00:16:34.737 "io_queue_requests": 0, 00:16:34.737 "keep_alive_timeout_ms": 10000, 00:16:34.737 "low_priority_weight": 0, 00:16:34.737 "medium_priority_weight": 0, 00:16:34.737 "nvme_adminq_poll_period_us": 10000, 00:16:34.737 "nvme_error_stat": false, 00:16:34.737 "nvme_ioq_poll_period_us": 0, 00:16:34.737 "rdma_cm_event_timeout_ms": 0, 00:16:34.737 "rdma_max_cq_size": 0, 00:16:34.737 "rdma_srq_size": 0, 00:16:34.737 "reconnect_delay_sec": 0, 00:16:34.737 "timeout_admin_us": 0, 00:16:34.737 
"timeout_us": 0, 00:16:34.737 "transport_ack_timeout": 0, 00:16:34.737 "transport_retry_count": 4, 00:16:34.737 "transport_tos": 0 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "bdev_nvme_set_hotplug", 00:16:34.737 "params": { 00:16:34.737 "enable": false, 00:16:34.737 "period_us": 100000 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "bdev_malloc_create", 00:16:34.737 "params": { 00:16:34.737 "block_size": 4096, 00:16:34.737 "name": "malloc0", 00:16:34.737 "num_blocks": 8192, 00:16:34.737 "optimal_io_boundary": 0, 00:16:34.737 "physical_block_size": 4096, 00:16:34.737 "uuid": "10293e06-8eef-4057-a0a0-e858f8b655d3" 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "bdev_wait_for_examine" 00:16:34.737 } 00:16:34.737 ] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "nbd", 00:16:34.737 "config": [] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "scheduler", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "framework_set_scheduler", 00:16:34.737 "params": { 00:16:34.737 "name": "static" 00:16:34.737 } 00:16:34.737 } 00:16:34.737 ] 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "subsystem": "nvmf", 00:16:34.737 "config": [ 00:16:34.737 { 00:16:34.737 "method": "nvmf_set_config", 00:16:34.737 "params": { 00:16:34.737 "admin_cmd_passthru": { 00:16:34.737 "identify_ctrlr": false 00:16:34.737 }, 00:16:34.737 "discovery_filter": "match_any" 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "nvmf_set_max_subsystems", 00:16:34.737 "params": { 00:16:34.737 "max_subsystems": 1024 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "nvmf_set_crdt", 00:16:34.737 "params": { 00:16:34.737 "crdt1": 0, 00:16:34.737 "crdt2": 0, 00:16:34.737 "crdt3": 0 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "nvmf_create_transport", 00:16:34.737 "params": { 00:16:34.737 "abort_timeout_sec": 1, 00:16:34.737 "ack_timeout": 0, 00:16:34.737 "buf_cache_size": 4294967295, 00:16:34.737 "c2h_success": false, 00:16:34.737 "data_wr_pool_size": 0, 00:16:34.737 "dif_insert_or_strip": false, 00:16:34.737 "in_capsule_data_size": 4096, 00:16:34.737 "io_unit_size": 131072, 00:16:34.737 "max_aq_depth": 128, 00:16:34.737 "max_io_qpairs_per_ctrlr": 127, 00:16:34.737 "max_io_size": 131072, 00:16:34.737 "max_queue_depth": 128, 00:16:34.737 "num_shared_buffers": 511, 00:16:34.737 "sock_priority": 0, 00:16:34.737 "trtype": "TCP", 00:16:34.737 "zcopy": false 00:16:34.737 } 00:16:34.737 }, 00:16:34.737 { 00:16:34.737 "method": "nvmf_create_subsystem", 00:16:34.737 "params": { 00:16:34.737 "allow_any_host": false, 00:16:34.737 "ana_reporting": false, 00:16:34.737 "max_cntlid": 65519, 00:16:34.737 "max_namespaces": 32, 00:16:34.737 "min_cntlid": 1, 00:16:34.738 "model_number": "SPDK bdev Controller", 00:16:34.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.738 "serial_number": "00000000000000000000" 00:16:34.738 } 00:16:34.738 }, 00:16:34.738 { 00:16:34.738 "method": "nvmf_subsystem_add_host", 00:16:34.738 "params": { 00:16:34.738 "host": "nqn.2016-06.io.spdk:host1", 00:16:34.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.738 "psk": "key0" 00:16:34.738 } 00:16:34.738 }, 00:16:34.738 { 00:16:34.738 "method": "nvmf_subsystem_add_ns", 00:16:34.738 "params": { 00:16:34.738 "namespace": { 00:16:34.738 "bdev_name": "malloc0", 00:16:34.738 "nguid": "10293E068EEF4057A0A0E858F8B655D3", 00:16:34.738 "no_auto_visible": false, 00:16:34.738 "nsid": 1, 00:16:34.738 "uuid": 
"10293e06-8eef-4057-a0a0-e858f8b655d3" 00:16:34.738 }, 00:16:34.738 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:34.738 } 00:16:34.738 }, 00:16:34.738 { 00:16:34.738 "method": "nvmf_subsystem_add_listener", 00:16:34.738 "params": { 00:16:34.738 "listen_address": { 00:16:34.738 "adrfam": "IPv4", 00:16:34.738 "traddr": "10.0.0.2", 00:16:34.738 "trsvcid": "4420", 00:16:34.738 "trtype": "TCP" 00:16:34.738 }, 00:16:34.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.738 "secure_channel": true 00:16:34.738 } 00:16:34.738 } 00:16:34.738 ] 00:16:34.738 } 00:16:34.738 ] 00:16:34.738 }' 00:16:34.738 08:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:34.997 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:16:34.997 "subsystems": [ 00:16:34.997 { 00:16:34.997 "subsystem": "keyring", 00:16:34.997 "config": [ 00:16:34.997 { 00:16:34.997 "method": "keyring_file_add_key", 00:16:34.997 "params": { 00:16:34.997 "name": "key0", 00:16:34.997 "path": "/tmp/tmp.jvg9kNNWpN" 00:16:34.997 } 00:16:34.997 } 00:16:34.997 ] 00:16:34.997 }, 00:16:34.997 { 00:16:34.997 "subsystem": "iobuf", 00:16:34.997 "config": [ 00:16:34.997 { 00:16:34.997 "method": "iobuf_set_options", 00:16:34.997 "params": { 00:16:34.997 "large_bufsize": 135168, 00:16:34.997 "large_pool_count": 1024, 00:16:34.997 "small_bufsize": 8192, 00:16:34.997 "small_pool_count": 8192 00:16:34.997 } 00:16:34.997 } 00:16:34.997 ] 00:16:34.997 }, 00:16:34.997 { 00:16:34.997 "subsystem": "sock", 00:16:34.997 "config": [ 00:16:34.997 { 00:16:34.997 "method": "sock_set_default_impl", 00:16:34.997 "params": { 00:16:34.997 "impl_name": "posix" 00:16:34.997 } 00:16:34.997 }, 00:16:34.997 { 00:16:34.997 "method": "sock_impl_set_options", 00:16:34.997 "params": { 00:16:34.997 "enable_ktls": false, 00:16:34.997 "enable_placement_id": 0, 00:16:34.997 "enable_quickack": false, 00:16:34.997 "enable_recv_pipe": true, 00:16:34.997 "enable_zerocopy_send_client": false, 00:16:34.997 "enable_zerocopy_send_server": true, 00:16:34.997 "impl_name": "ssl", 00:16:34.997 "recv_buf_size": 4096, 00:16:34.997 "send_buf_size": 4096, 00:16:34.997 "tls_version": 0, 00:16:34.997 "zerocopy_threshold": 0 00:16:34.997 } 00:16:34.997 }, 00:16:34.997 { 00:16:34.997 "method": "sock_impl_set_options", 00:16:34.997 "params": { 00:16:34.997 "enable_ktls": false, 00:16:34.997 "enable_placement_id": 0, 00:16:34.997 "enable_quickack": false, 00:16:34.997 "enable_recv_pipe": true, 00:16:34.997 "enable_zerocopy_send_client": false, 00:16:34.997 "enable_zerocopy_send_server": true, 00:16:34.997 "impl_name": "posix", 00:16:34.997 "recv_buf_size": 2097152, 00:16:34.997 "send_buf_size": 2097152, 00:16:34.997 "tls_version": 0, 00:16:34.997 "zerocopy_threshold": 0 00:16:34.997 } 00:16:34.997 } 00:16:34.997 ] 00:16:34.997 }, 00:16:34.997 { 00:16:34.997 "subsystem": "vmd", 00:16:34.997 "config": [] 00:16:34.997 }, 00:16:34.997 { 00:16:34.997 "subsystem": "accel", 00:16:34.997 "config": [ 00:16:34.997 { 00:16:34.997 "method": "accel_set_options", 00:16:34.997 "params": { 00:16:34.997 "buf_count": 2048, 00:16:34.997 "large_cache_size": 16, 00:16:34.997 "sequence_count": 2048, 00:16:34.997 "small_cache_size": 128, 00:16:34.997 "task_count": 2048 00:16:34.997 } 00:16:34.997 } 00:16:34.998 ] 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "subsystem": "bdev", 00:16:34.998 "config": [ 00:16:34.998 { 00:16:34.998 "method": "bdev_set_options", 00:16:34.998 "params": { 00:16:34.998 "bdev_auto_examine": true, 
00:16:34.998 "bdev_io_cache_size": 256, 00:16:34.998 "bdev_io_pool_size": 65535, 00:16:34.998 "iobuf_large_cache_size": 16, 00:16:34.998 "iobuf_small_cache_size": 128 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_raid_set_options", 00:16:34.998 "params": { 00:16:34.998 "process_window_size_kb": 1024 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_iscsi_set_options", 00:16:34.998 "params": { 00:16:34.998 "timeout_sec": 30 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_nvme_set_options", 00:16:34.998 "params": { 00:16:34.998 "action_on_timeout": "none", 00:16:34.998 "allow_accel_sequence": false, 00:16:34.998 "arbitration_burst": 0, 00:16:34.998 "bdev_retry_count": 3, 00:16:34.998 "ctrlr_loss_timeout_sec": 0, 00:16:34.998 "delay_cmd_submit": true, 00:16:34.998 "dhchap_dhgroups": [ 00:16:34.998 "null", 00:16:34.998 "ffdhe2048", 00:16:34.998 "ffdhe3072", 00:16:34.998 "ffdhe4096", 00:16:34.998 "ffdhe6144", 00:16:34.998 "ffdhe8192" 00:16:34.998 ], 00:16:34.998 "dhchap_digests": [ 00:16:34.998 "sha256", 00:16:34.998 "sha384", 00:16:34.998 "sha512" 00:16:34.998 ], 00:16:34.998 "disable_auto_failback": false, 00:16:34.998 "fast_io_fail_timeout_sec": 0, 00:16:34.998 "generate_uuids": false, 00:16:34.998 "high_priority_weight": 0, 00:16:34.998 "io_path_stat": false, 00:16:34.998 "io_queue_requests": 512, 00:16:34.998 "keep_alive_timeout_ms": 10000, 00:16:34.998 "low_priority_weight": 0, 00:16:34.998 "medium_priority_weight": 0, 00:16:34.998 "nvme_adminq_poll_period_us": 10000, 00:16:34.998 "nvme_error_stat": false, 00:16:34.998 "nvme_ioq_poll_period_us": 0, 00:16:34.998 "rdma_cm_event_timeout_ms": 0, 00:16:34.998 "rdma_max_cq_size": 0, 00:16:34.998 "rdma_srq_size": 0, 00:16:34.998 "reconnect_delay_sec": 0, 00:16:34.998 "timeout_admin_us": 0, 00:16:34.998 "timeout_us": 0, 00:16:34.998 "transport_ack_timeout": 0, 00:16:34.998 "transport_retry_count": 4, 00:16:34.998 "transport_tos": 0 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_nvme_attach_controller", 00:16:34.998 "params": { 00:16:34.998 "adrfam": "IPv4", 00:16:34.998 "ctrlr_loss_timeout_sec": 0, 00:16:34.998 "ddgst": false, 00:16:34.998 "fast_io_fail_timeout_sec": 0, 00:16:34.998 "hdgst": false, 00:16:34.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.998 "name": "nvme0", 00:16:34.998 "prchk_guard": false, 00:16:34.998 "prchk_reftag": false, 00:16:34.998 "psk": "key0", 00:16:34.998 "reconnect_delay_sec": 0, 00:16:34.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.998 "traddr": "10.0.0.2", 00:16:34.998 "trsvcid": "4420", 00:16:34.998 "trtype": "TCP" 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_nvme_set_hotplug", 00:16:34.998 "params": { 00:16:34.998 "enable": false, 00:16:34.998 "period_us": 100000 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_enable_histogram", 00:16:34.998 "params": { 00:16:34.998 "enable": true, 00:16:34.998 "name": "nvme0n1" 00:16:34.998 } 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "method": "bdev_wait_for_examine" 00:16:34.998 } 00:16:34.998 ] 00:16:34.998 }, 00:16:34.998 { 00:16:34.998 "subsystem": "nbd", 00:16:34.998 "config": [] 00:16:34.998 } 00:16:34.998 ] 00:16:34.998 }' 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 78968 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78968 ']' 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78968 
00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78968 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:34.998 killing process with pid 78968 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78968' 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78968 00:16:34.998 Received shutdown signal, test time was about 1.000000 seconds 00:16:34.998 00:16:34.998 Latency(us) 00:16:34.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.998 =================================================================================================================== 00:16:34.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.998 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78968 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 78918 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78918 ']' 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78918 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78918 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78918' 00:16:35.269 killing process with pid 78918 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78918 00:16:35.269 [2024-05-15 08:56:51.393378] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:35.269 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78918 00:16:35.537 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:35.537 08:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:35.537 "subsystems": [ 00:16:35.537 { 00:16:35.537 "subsystem": "keyring", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": "keyring_file_add_key", 00:16:35.537 "params": { 00:16:35.537 "name": "key0", 00:16:35.537 "path": "/tmp/tmp.jvg9kNNWpN" 00:16:35.537 } 00:16:35.537 } 00:16:35.537 ] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "iobuf", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": "iobuf_set_options", 00:16:35.537 "params": { 00:16:35.537 "large_bufsize": 135168, 00:16:35.537 "large_pool_count": 1024, 00:16:35.537 "small_bufsize": 8192, 00:16:35.537 "small_pool_count": 8192 00:16:35.537 } 00:16:35.537 } 00:16:35.537 ] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "sock", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": 
"sock_set_default_impl", 00:16:35.537 "params": { 00:16:35.537 "impl_name": "posix" 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "sock_impl_set_options", 00:16:35.537 "params": { 00:16:35.537 "enable_ktls": false, 00:16:35.537 "enable_placement_id": 0, 00:16:35.537 "enable_quickack": false, 00:16:35.537 "enable_recv_pipe": true, 00:16:35.537 "enable_zerocopy_send_client": false, 00:16:35.537 "enable_zerocopy_send_server": true, 00:16:35.537 "impl_name": "ssl", 00:16:35.537 "recv_buf_size": 4096, 00:16:35.537 "send_buf_size": 4096, 00:16:35.537 "tls_version": 0, 00:16:35.537 "zerocopy_threshold": 0 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "sock_impl_set_options", 00:16:35.537 "params": { 00:16:35.537 "enable_ktls": false, 00:16:35.537 "enable_placement_id": 0, 00:16:35.537 "enable_quickack": false, 00:16:35.537 "enable_recv_pipe": true, 00:16:35.537 "enable_zerocopy_send_client": false, 00:16:35.537 "enable_zerocopy_send_server": true, 00:16:35.537 "impl_name": "posix", 00:16:35.537 "recv_buf_size": 2097152, 00:16:35.537 "send_buf_size": 2097152, 00:16:35.537 "tls_version": 0, 00:16:35.537 "zerocopy_threshold": 0 00:16:35.537 } 00:16:35.537 } 00:16:35.537 ] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "vmd", 00:16:35.537 "config": [] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "accel", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": "accel_set_options", 00:16:35.537 "params": { 00:16:35.537 "buf_count": 2048, 00:16:35.537 "large_cache_size": 16, 00:16:35.537 "sequence_count": 2048, 00:16:35.537 "small_cache_size": 128, 00:16:35.537 "task_count": 2048 00:16:35.537 } 00:16:35.537 } 00:16:35.537 ] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "bdev", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": "bdev_set_options", 00:16:35.537 "params": { 00:16:35.537 "bdev_auto_examine": true, 00:16:35.537 "bdev_io_cache_size": 256, 00:16:35.537 "bdev_io_pool_size": 65535, 00:16:35.537 "iobuf_large_cache_size": 16, 00:16:35.537 "iobuf_small_cache_size": 128 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "bdev_raid_set_options", 00:16:35.537 "params": { 00:16:35.537 "process_window_size_kb": 1024 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "bdev_iscsi_set_options", 00:16:35.537 "params": { 00:16:35.537 "timeout_sec": 30 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "bdev_nvme_set_options", 00:16:35.537 "params": { 00:16:35.537 "action_on_timeout": "none", 00:16:35.537 "allow_accel_sequence": false, 00:16:35.537 "arbitration_burst": 0, 00:16:35.537 "bdev_retry_count": 3, 00:16:35.537 "ctrlr_loss_timeout_sec": 0, 00:16:35.537 "delay_cmd_submit": true, 00:16:35.537 "dhchap_dhgroups": [ 00:16:35.537 "null", 00:16:35.537 "ffdhe2048", 00:16:35.537 "ffdhe3072", 00:16:35.537 "ffdhe4096", 00:16:35.537 "ffdhe6144", 00:16:35.537 "ffdhe8192" 00:16:35.537 ], 00:16:35.537 "dhchap_digests": [ 00:16:35.537 "sha256", 00:16:35.537 "sha384", 00:16:35.537 "sha512" 00:16:35.537 ], 00:16:35.537 "disable_auto_failback": false, 00:16:35.537 "fast_io_fail_timeout_sec": 0, 00:16:35.537 "generate_uuids": false, 00:16:35.537 "high_priority_weight": 0, 00:16:35.537 "io_path_stat": false, 00:16:35.537 "io_queue_requests": 0, 00:16:35.537 "keep_alive_timeout_ms": 10000, 00:16:35.537 "low_priority_weight": 0, 00:16:35.537 "medium_priority_weight": 0, 00:16:35.537 "nvme_adminq_poll_period_us": 10000, 00:16:35.537 "nvme_error_stat": false, 00:16:35.537 
"nvme_ioq_poll_period_us": 0, 00:16:35.537 "rdma_cm_event_timeout_ms": 0, 00:16:35.537 "rdma_max_cq_size": 0, 00:16:35.537 "rdma_srq_size": 0, 00:16:35.537 "reconnect_delay_sec": 0, 00:16:35.537 "timeout_admin_us": 0, 00:16:35.537 "timeout_us": 0, 00:16:35.537 "transport_ack_timeout": 0, 00:16:35.537 "transport_retry_count": 4, 00:16:35.537 "transport_tos": 0 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "bdev_nvme_set_hotplug", 00:16:35.537 "params": { 00:16:35.537 "enable": false, 00:16:35.537 "period_us": 100000 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "bdev_malloc_create", 00:16:35.537 "params": { 00:16:35.537 "block_size": 4096, 00:16:35.537 "name": "malloc0", 00:16:35.537 "num_blocks": 8192, 00:16:35.537 "optimal_io_boundary": 0, 00:16:35.537 "physical_block_size": 4096, 00:16:35.537 "uuid": "10293e06-8eef-4057-a0a0-e858f8b655d3" 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "bdev_wait_for_examine" 00:16:35.537 } 00:16:35.537 ] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "nbd", 00:16:35.537 "config": [] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "scheduler", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": "framework_set_scheduler", 00:16:35.537 "params": { 00:16:35.537 "name": "static" 00:16:35.537 } 00:16:35.537 } 00:16:35.537 ] 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "subsystem": "nvmf", 00:16:35.537 "config": [ 00:16:35.537 { 00:16:35.537 "method": "nvmf_set_config", 00:16:35.537 "params": { 00:16:35.537 "admin_cmd_passthru": { 00:16:35.537 "identify_ctrlr": false 00:16:35.537 }, 00:16:35.537 "discovery_filter": "match_any" 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "nvmf_set_max_subsystems", 00:16:35.537 "params": { 00:16:35.537 "max_subsystems": 1024 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "nvmf_set_crdt", 00:16:35.537 "params": { 00:16:35.537 "crdt1": 0, 00:16:35.537 "crdt2": 0, 00:16:35.537 "crdt3": 0 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.537 "method": "nvmf_create_transport", 00:16:35.537 "params": { 00:16:35.537 "abort_timeout_sec": 1, 00:16:35.537 "ack_timeout": 0, 00:16:35.537 "buf_cache_size": 4294967295, 00:16:35.537 "c2h_success": false, 00:16:35.537 "data_wr_pool_size": 0, 00:16:35.537 "dif_insert_or_strip": false, 00:16:35.537 "in_capsule_data_size": 4096, 00:16:35.537 "io_unit_size": 131072, 00:16:35.537 "max_aq_depth": 128, 00:16:35.537 "max_io_qpairs_per_ctrlr": 127, 00:16:35.537 "max_io_size": 131072, 00:16:35.537 "max_queue_depth": 128, 00:16:35.537 "num_shared_buffers": 511, 00:16:35.537 "sock_priority": 0, 00:16:35.537 "trtype": "TCP", 00:16:35.537 "zcopy": false 00:16:35.537 } 00:16:35.537 }, 00:16:35.537 { 00:16:35.538 "method": "nvmf_create_subsystem", 00:16:35.538 "params": { 00:16:35.538 "allow_any_host": false, 00:16:35.538 "ana_reporting": false, 00:16:35.538 "max_cntlid": 65519, 00:16:35.538 "max_namespaces": 32, 00:16:35.538 "min_cntlid": 1, 00:16:35.538 "model_number": "SPDK bdev Controller", 00:16:35.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.538 "serial_number": "00000000000000000000" 00:16:35.538 } 00:16:35.538 }, 00:16:35.538 { 00:16:35.538 "method": "nvmf_subsystem_add_host", 00:16:35.538 "params": { 00:16:35.538 "host": "nqn.2016-06.io.spdk:host1", 00:16:35.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.538 "psk": "key0" 00:16:35.538 } 00:16:35.538 }, 00:16:35.538 { 00:16:35.538 "method": "nvmf_subsystem_add_ns", 00:16:35.538 "params": { 
00:16:35.538 "namespace": { 00:16:35.538 "bdev_name": "malloc0", 00:16:35.538 "nguid": "10293E068EEF4057A0A0E858F8B655D3", 00:16:35.538 "no_auto_visible": false, 00:16:35.538 "nsid": 1, 00:16:35.538 "uuid": "10293e06-8eef-4057-a0a0-e858f8b655d3" 00:16:35.538 }, 00:16:35.538 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:35.538 } 00:16:35.538 }, 00:16:35.538 { 00:16:35.538 "method": "nvmf_subsystem_add_listener", 00:16:35.538 "params": { 00:16:35.538 "listen_address": { 00:16:35.538 "adrfam": "IPv4", 00:16:35.538 "traddr": "10.0.0.2", 00:16:35.538 "trsvcid": "4420", 00:16:35.538 "trtype": "TCP" 00:16:35.538 }, 00:16:35.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.538 "secure_channel": true 00:16:35.538 } 00:16:35.538 } 00:16:35.538 ] 00:16:35.538 } 00:16:35.538 ] 00:16:35.538 }' 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=79045 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 79045 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79045 ']' 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:35.538 08:56:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.538 [2024-05-15 08:56:51.641728] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:35.538 [2024-05-15 08:56:51.641822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.797 [2024-05-15 08:56:51.778160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.797 [2024-05-15 08:56:51.858838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.797 [2024-05-15 08:56:51.858917] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.797 [2024-05-15 08:56:51.858935] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.797 [2024-05-15 08:56:51.858948] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.797 [2024-05-15 08:56:51.858959] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:35.797 [2024-05-15 08:56:51.859065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.055 [2024-05-15 08:56:52.050449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.055 [2024-05-15 08:56:52.082299] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:36.055 [2024-05-15 08:56:52.082385] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:36.055 [2024-05-15 08:56:52.082586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=79089 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 79089 /var/tmp/bdevperf.sock 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 79089 ']' 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:36.623 08:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:16:36.623 "subsystems": [ 00:16:36.623 { 00:16:36.623 "subsystem": "keyring", 00:16:36.623 "config": [ 00:16:36.623 { 00:16:36.623 "method": "keyring_file_add_key", 00:16:36.623 "params": { 00:16:36.623 "name": "key0", 00:16:36.623 "path": "/tmp/tmp.jvg9kNNWpN" 00:16:36.623 } 00:16:36.623 } 00:16:36.623 ] 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "subsystem": "iobuf", 00:16:36.623 "config": [ 00:16:36.623 { 00:16:36.623 "method": "iobuf_set_options", 00:16:36.623 "params": { 00:16:36.623 "large_bufsize": 135168, 00:16:36.623 "large_pool_count": 1024, 00:16:36.623 "small_bufsize": 8192, 00:16:36.623 "small_pool_count": 8192 00:16:36.623 } 00:16:36.623 } 00:16:36.623 ] 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "subsystem": "sock", 00:16:36.623 "config": [ 00:16:36.623 { 00:16:36.623 "method": "sock_set_default_impl", 00:16:36.623 "params": { 00:16:36.623 "impl_name": "posix" 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "sock_impl_set_options", 00:16:36.623 "params": { 00:16:36.623 "enable_ktls": false, 00:16:36.623 "enable_placement_id": 0, 00:16:36.623 "enable_quickack": false, 00:16:36.623 "enable_recv_pipe": true, 00:16:36.623 "enable_zerocopy_send_client": false, 00:16:36.623 "enable_zerocopy_send_server": true, 00:16:36.623 "impl_name": "ssl", 00:16:36.623 "recv_buf_size": 4096, 00:16:36.623 "send_buf_size": 4096, 00:16:36.623 "tls_version": 0, 00:16:36.623 "zerocopy_threshold": 0 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 
00:16:36.623 "method": "sock_impl_set_options", 00:16:36.623 "params": { 00:16:36.623 "enable_ktls": false, 00:16:36.623 "enable_placement_id": 0, 00:16:36.623 "enable_quickack": false, 00:16:36.623 "enable_recv_pipe": true, 00:16:36.623 "enable_zerocopy_send_client": false, 00:16:36.623 "enable_zerocopy_send_server": true, 00:16:36.623 "impl_name": "posix", 00:16:36.623 "recv_buf_size": 2097152, 00:16:36.623 "send_buf_size": 2097152, 00:16:36.623 "tls_version": 0, 00:16:36.623 "zerocopy_threshold": 0 00:16:36.623 } 00:16:36.623 } 00:16:36.623 ] 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "subsystem": "vmd", 00:16:36.623 "config": [] 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "subsystem": "accel", 00:16:36.623 "config": [ 00:16:36.623 { 00:16:36.623 "method": "accel_set_options", 00:16:36.623 "params": { 00:16:36.623 "buf_count": 2048, 00:16:36.623 "large_cache_size": 16, 00:16:36.623 "sequence_count": 2048, 00:16:36.623 "small_cache_size": 128, 00:16:36.623 "task_count": 2048 00:16:36.623 } 00:16:36.623 } 00:16:36.623 ] 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "subsystem": "bdev", 00:16:36.623 "config": [ 00:16:36.623 { 00:16:36.623 "method": "bdev_set_options", 00:16:36.623 "params": { 00:16:36.623 "bdev_auto_examine": true, 00:16:36.623 "bdev_io_cache_size": 256, 00:16:36.623 "bdev_io_pool_size": 65535, 00:16:36.623 "iobuf_large_cache_size": 16, 00:16:36.623 "iobuf_small_cache_size": 128 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "bdev_raid_set_options", 00:16:36.623 "params": { 00:16:36.623 "process_window_size_kb": 1024 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "bdev_iscsi_set_options", 00:16:36.623 "params": { 00:16:36.623 "timeout_sec": 30 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "bdev_nvme_set_options", 00:16:36.623 "params": { 00:16:36.623 "action_on_timeout": "none", 00:16:36.623 "allow_accel_sequence": false, 00:16:36.623 "arbitration_burst": 0, 00:16:36.623 "bdev_retry_count": 3, 00:16:36.623 "ctrlr_loss_timeout_sec": 0, 00:16:36.623 "delay_cmd_submit": true, 00:16:36.623 "dhchap_dhgroups": [ 00:16:36.623 "null", 00:16:36.623 "ffdhe2048", 00:16:36.623 "ffdhe3072", 00:16:36.623 "ffdhe4096", 00:16:36.623 "ffdhe6144", 00:16:36.623 "ffdhe8192" 00:16:36.623 ], 00:16:36.623 "dhchap_digests": [ 00:16:36.623 "sha256", 00:16:36.623 "sha384", 00:16:36.623 "sha512" 00:16:36.623 ], 00:16:36.623 "disable_auto_failback": false, 00:16:36.623 "fast_io_fail_timeout_sec": 0, 00:16:36.623 "generate_uuids": false, 00:16:36.623 "high_priority_weight": 0, 00:16:36.623 "io_path_stat": false, 00:16:36.623 "io_queue_requests": 512, 00:16:36.623 "keep_alive_timeout_ms": 10000, 00:16:36.623 "low_priority_weight": 0, 00:16:36.623 "medium_priority_weight": 0, 00:16:36.623 "nvme_adminq_poll_period_us": 10000, 00:16:36.623 "nvme_error_stat": false, 00:16:36.623 "nvme_ioq_poll_period_us": 0, 00:16:36.623 "rdma_cm_event_timeout_ms": 0, 00:16:36.623 "rdma_max_cq_size": 0, 00:16:36.623 "rdma_srq_size": 0, 00:16:36.623 "reconnect_delay_sec": 0, 00:16:36.623 "timeout_admin_us": 0, 00:16:36.623 "timeout_us": 0, 00:16:36.623 "transport_ack_timeout": 0, 00:16:36.623 "transport_retry_count": 4, 00:16:36.623 "transport_tos": 0 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "bdev_nvme_attach_controller", 00:16:36.623 "params": { 00:16:36.623 "adrfam": "IPv4", 00:16:36.623 "ctrlr_loss_timeout_sec": 0, 00:16:36.623 "ddgst": false, 00:16:36.623 "fast_io_fail_timeout_sec": 0, 00:16:36.623 "hdgst": false, 00:16:36.623 
"hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.623 "name": "nvme0", 00:16:36.623 "prchk_guard": false, 00:16:36.623 "prchk_reftag": false, 00:16:36.623 "psk": "key0", 00:16:36.623 "reconnect_delay_sec": 0, 00:16:36.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.623 "traddr": "10.0.0.2", 00:16:36.623 "trsvcid": "4420", 00:16:36.623 "trtype": "TCP" 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "bdev_nvme_set_hotplug", 00:16:36.623 "params": { 00:16:36.623 "enable": false, 00:16:36.623 "period_us": 100000 00:16:36.623 } 00:16:36.623 }, 00:16:36.623 { 00:16:36.623 "method": "bdev_enable_histogram", 00:16:36.624 "params": { 00:16:36.624 "enable": true, 00:16:36.624 "name": "nvme0n1" 00:16:36.624 } 00:16:36.624 }, 00:16:36.624 { 00:16:36.624 "method": "bdev_wait_for_examine" 00:16:36.624 } 00:16:36.624 ] 00:16:36.624 }, 00:16:36.624 { 00:16:36.624 "subsystem": "nbd", 00:16:36.624 "config": [] 00:16:36.624 } 00:16:36.624 ] 00:16:36.624 }' 00:16:36.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.624 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.624 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:36.624 08:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.624 [2024-05-15 08:56:52.747414] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:36.624 [2024-05-15 08:56:52.747540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79089 ] 00:16:36.882 [2024-05-15 08:56:52.893878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.882 [2024-05-15 08:56:52.977459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.140 [2024-05-15 08:56:53.117028] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.706 08:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:37.706 08:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:37.706 08:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:37.706 08:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:16:37.965 08:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.965 08:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:37.965 Running I/O for 1 seconds... 
00:16:39.341 00:16:39.341 Latency(us) 00:16:39.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.341 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.341 Verification LBA range: start 0x0 length 0x2000 00:16:39.341 nvme0n1 : 1.04 3718.97 14.53 0.00 0.00 33768.99 7268.54 26810.18 00:16:39.341 =================================================================================================================== 00:16:39.341 Total : 3718.97 14.53 0.00 0.00 33768.99 7268.54 26810.18 00:16:39.341 0 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:39.341 nvmf_trace.0 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 79089 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79089 ']' 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79089 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79089 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:39.341 killing process with pid 79089 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79089' 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79089 00:16:39.341 Received shutdown signal, test time was about 1.000000 seconds 00:16:39.341 00:16:39.341 Latency(us) 00:16:39.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.341 =================================================================================================================== 00:16:39.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79089 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.341 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.341 rmmod nvme_tcp 00:16:39.341 rmmod nvme_fabrics 00:16:39.600 rmmod nvme_keyring 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 79045 ']' 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 79045 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 79045 ']' 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 79045 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79045 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:39.600 killing process with pid 79045 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79045' 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 79045 00:16:39.600 [2024-05-15 08:56:55.644542] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:39.600 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 79045 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.i1vnhu9Pje /tmp/tmp.TjYcRKNyMu /tmp/tmp.jvg9kNNWpN 00:16:39.860 00:16:39.860 real 1m23.373s 00:16:39.860 user 2m12.586s 00:16:39.860 sys 0m26.729s 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:39.860 08:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.860 ************************************ 00:16:39.860 END TEST nvmf_tls 00:16:39.860 ************************************ 00:16:39.860 08:56:55 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:39.860 08:56:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:39.860 08:56:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:39.860 08:56:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.860 ************************************ 00:16:39.860 START TEST nvmf_fips 00:16:39.860 ************************************ 00:16:39.860 08:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:39.860 * Looking for test storage... 00:16:39.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:39.860 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:39.861 08:56:56 
nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:39.861 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:40.121 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:40.122 Error setting digest 00:16:40.122 0052500EA47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:40.122 0052500EA47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:40.122 Cannot find device "nvmf_tgt_br" 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.122 Cannot find device "nvmf_tgt_br2" 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:40.122 Cannot find device "nvmf_tgt_br" 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:40.122 Cannot find device "nvmf_tgt_br2" 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:40.122 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.389 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:40.389 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:40.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:40.390 00:16:40.390 --- 10.0.0.2 ping statistics --- 00:16:40.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.390 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:40.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:40.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:40.390 00:16:40.390 --- 10.0.0.3 ping statistics --- 00:16:40.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.390 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:40.390 00:16:40.390 --- 10.0.0.1 ping statistics --- 00:16:40.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.390 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=79369 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 79369 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 79369 ']' 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.390 08:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:40.649 [2024-05-15 08:56:56.683492] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:16:40.649 [2024-05-15 08:56:56.683616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.649 [2024-05-15 08:56:56.820662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.908 [2024-05-15 08:56:56.892229] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.908 [2024-05-15 08:56:56.892308] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.908 [2024-05-15 08:56:56.892331] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.908 [2024-05-15 08:56:56.892346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.908 [2024-05-15 08:56:56.892359] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.908 [2024-05-15 08:56:56.892406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.475 08:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.734 [2024-05-15 08:56:57.961215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.993 [2024-05-15 08:56:57.977129] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:41.993 [2024-05-15 08:56:57.977210] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:41.993 [2024-05-15 08:56:57.977390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.993 [2024-05-15 08:56:58.003740] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:41.993 malloc0 00:16:41.993 08:56:58 
nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=79425 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 79425 /var/tmp/bdevperf.sock 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 79425 ']' 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:41.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:41.993 08:56:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:41.993 [2024-05-15 08:56:58.106112] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:41.993 [2024-05-15 08:56:58.106215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79425 ] 00:16:42.251 [2024-05-15 08:56:58.244333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.251 [2024-05-15 08:56:58.322278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.214 08:56:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:43.214 08:56:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:16:43.214 08:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:43.214 [2024-05-15 08:56:59.361472] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:43.214 [2024-05-15 08:56:59.361601] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:43.214 TLSTESTn1 00:16:43.472 08:56:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:43.472 Running I/O for 10 seconds... 
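Note: stripped of the xtrace plumbing, the TLS portion of fips.sh above comes down to provisioning a retained PSK in the NVMe interchange format and attaching a TLS-protected NVMe/TCP controller through the bdevperf RPC socket before driving I/O. A minimal sketch using only the paths, names and flags already visible in the trace (the target side is configured with the same key by setup_nvmf_tgt_conf, traced as fips.sh@141 above):

    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"                                # PSK file must not be world readable
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach a TLS-secured controller via the bdevperf app started with -z -r /var/tmp/bdevperf.sock
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"
    # kick off the queued bdevperf job (128 QD, 4 KiB verify for 10 s, per the -q/-o/-w/-t flags above)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The TLSTESTn1 results that follow are that verify workload running over the TLS session.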
00:16:53.496 00:16:53.496 Latency(us) 00:16:53.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.496 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:53.496 Verification LBA range: start 0x0 length 0x2000 00:16:53.496 TLSTESTn1 : 10.02 3828.33 14.95 0.00 0.00 33366.71 7685.59 29789.09 00:16:53.496 =================================================================================================================== 00:16:53.496 Total : 3828.33 14.95 0.00 0.00 33366.71 7685.59 29789.09 00:16:53.496 0 00:16:53.496 08:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:53.496 08:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:53.496 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:16:53.496 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:53.497 nvmf_trace.0 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 79425 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 79425 ']' 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 79425 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79425 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:53.497 killing process with pid 79425 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79425' 00:16:53.497 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.497 00:16:53.497 Latency(us) 00:16:53.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.497 =================================================================================================================== 00:16:53.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 79425 00:16:53.497 [2024-05-15 08:57:09.725689] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:53.497 08:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 79425 00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
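Note: before tearing the target down, the cleanup path above snapshots the trace buffer that nvmf_tgt (started with -i 0) left in shared memory, so a failed run can still be analysed offline. Reduced to its two effective commands, with the exact file names from the trace:

    find /dev/shm -name '*.0' -printf '%f\n'    # locates nvmf_trace.0 for shm id 0
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0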
00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.755 08:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.755 rmmod nvme_tcp 00:16:53.755 rmmod nvme_fabrics 00:16:54.016 rmmod nvme_keyring 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 79369 ']' 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 79369 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 79369 ']' 00:16:54.016 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 79369 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79369 00:16:54.017 killing process with pid 79369 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79369' 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 79369 00:16:54.017 [2024-05-15 08:57:10.039696] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:54.017 [2024-05-15 08:57:10.039745] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 79369 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.017 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.300 08:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:54.300 08:57:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:54.300 ************************************ 00:16:54.300 END TEST nvmf_fips 00:16:54.300 ************************************ 00:16:54.300 00:16:54.300 real 0m14.351s 00:16:54.300 user 0m19.845s 
00:16:54.300 sys 0m5.572s 00:16:54.300 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:54.300 08:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:54.300 08:57:10 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:54.300 08:57:10 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:16:54.300 08:57:10 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.300 08:57:10 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.300 08:57:10 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:16:54.300 08:57:10 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.300 08:57:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.300 ************************************ 00:16:54.300 START TEST nvmf_multicontroller 00:16:54.300 ************************************ 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:54.300 * Looking for test storage... 00:16:54.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:16:54.300 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.301 
08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:54.301 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 
-- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:54.302 Cannot find device "nvmf_tgt_br" 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:54.302 Cannot find device "nvmf_tgt_br2" 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:54.302 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:54.561 Cannot find device "nvmf_tgt_br" 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:54.561 Cannot find device "nvmf_tgt_br2" 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr 
add 10.0.0.2/24 dev nvmf_tgt_if 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:54.561 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:54.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:54.820 00:16:54.820 --- 10.0.0.2 ping statistics --- 00:16:54.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.820 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:54.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:54.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:54.820 00:16:54.820 --- 10.0.0.3 ping statistics --- 00:16:54.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.820 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:16:54.820 00:16:54.820 --- 10.0.0.1 ping statistics --- 00:16:54.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.820 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=79789 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 79789 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 79789 ']' 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:54.820 08:57:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 [2024-05-15 08:57:10.919819] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:54.820 [2024-05-15 08:57:10.919913] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.080 [2024-05-15 08:57:11.061271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.080 [2024-05-15 08:57:11.133523] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:55.080 [2024-05-15 08:57:11.133602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.080 [2024-05-15 08:57:11.133617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.080 [2024-05-15 08:57:11.133627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.080 [2024-05-15 08:57:11.133636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.080 [2024-05-15 08:57:11.134203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.080 [2024-05-15 08:57:11.134297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.080 [2024-05-15 08:57:11.134315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 [2024-05-15 08:57:11.940389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 Malloc0 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 [2024-05-15 08:57:11.997508] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:56.018 [2024-05-15 08:57:11.997786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 [2024-05-15 08:57:12.005657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 Malloc1 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.018 08:57:12 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=79841 00:16:56.018 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 79841 /var/tmp/bdevperf.sock 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 79841 ']' 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.019 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 NVMe0n1 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.278 1 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.278 2024/05/15 08:57:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:56.278 request: 00:16:56.278 { 00:16:56.278 "method": "bdev_nvme_attach_controller", 00:16:56.278 "params": { 00:16:56.278 "name": "NVMe0", 00:16:56.278 "trtype": "tcp", 00:16:56.278 "traddr": "10.0.0.2", 00:16:56.278 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:56.278 "hostaddr": "10.0.0.2", 00:16:56.278 "hostsvcid": "60000", 00:16:56.278 "adrfam": "ipv4", 00:16:56.278 "trsvcid": "4420", 00:16:56.278 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.278 } 00:16:56.278 } 00:16:56.278 Got JSON-RPC error response 00:16:56.278 GoRPCClient: error on JSON-RPC call 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.278 
08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.278 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.538 2024/05/15 08:57:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:56.538 request: 00:16:56.538 { 00:16:56.538 "method": "bdev_nvme_attach_controller", 00:16:56.538 "params": { 00:16:56.538 "name": "NVMe0", 00:16:56.538 "trtype": "tcp", 00:16:56.538 "traddr": "10.0.0.2", 00:16:56.538 "hostaddr": "10.0.0.2", 00:16:56.538 "hostsvcid": "60000", 00:16:56.538 "adrfam": "ipv4", 00:16:56.538 "trsvcid": "4420", 00:16:56.538 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:16:56.538 } 00:16:56.538 } 00:16:56.538 Got JSON-RPC error response 00:16:56.538 GoRPCClient: error on JSON-RPC call 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.538 2024/05/15 08:57:12 error on 
JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:56.538 request: 00:16:56.538 { 00:16:56.538 "method": "bdev_nvme_attach_controller", 00:16:56.538 "params": { 00:16:56.538 "name": "NVMe0", 00:16:56.538 "trtype": "tcp", 00:16:56.538 "traddr": "10.0.0.2", 00:16:56.538 "hostaddr": "10.0.0.2", 00:16:56.538 "hostsvcid": "60000", 00:16:56.538 "adrfam": "ipv4", 00:16:56.538 "trsvcid": "4420", 00:16:56.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.538 "multipath": "disable" 00:16:56.538 } 00:16:56.538 } 00:16:56.538 Got JSON-RPC error response 00:16:56.538 GoRPCClient: error on JSON-RPC call 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:56.538 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 2024/05/15 08:57:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:56.539 request: 00:16:56.539 { 00:16:56.539 "method": "bdev_nvme_attach_controller", 00:16:56.539 "params": { 00:16:56.539 "name": "NVMe0", 00:16:56.539 "trtype": "tcp", 
00:16:56.539 "traddr": "10.0.0.2", 00:16:56.539 "hostaddr": "10.0.0.2", 00:16:56.539 "hostsvcid": "60000", 00:16:56.539 "adrfam": "ipv4", 00:16:56.539 "trsvcid": "4420", 00:16:56.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.539 "multipath": "failover" 00:16:56.539 } 00:16:56.539 } 00:16:56.539 Got JSON-RPC error response 00:16:56.539 GoRPCClient: error on JSON-RPC call 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:56.539 08:57:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.933 0 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:57.933 08:57:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 79841 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 79841 ']' 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 79841 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79841 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:57.933 killing process with pid 79841 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79841' 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 79841 00:16:57.933 08:57:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 79841 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:16:57.933 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:57.933 [2024-05-15 08:57:12.110946] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:16:57.933 [2024-05-15 08:57:12.111065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79841 ] 00:16:57.933 [2024-05-15 08:57:12.251140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.933 [2024-05-15 08:57:12.319219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.933 [2024-05-15 08:57:12.694872] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 2e8fe8b0-5ff4-4d8d-ba68-c8a1fd126bde already exists 00:16:57.933 [2024-05-15 08:57:12.694931] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:2e8fe8b0-5ff4-4d8d-ba68-c8a1fd126bde alias for bdev NVMe1n1 00:16:57.933 [2024-05-15 08:57:12.694953] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:57.933 Running I/O for 1 seconds...
00:16:57.933
00:16:57.933 Latency(us)
00:16:57.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:57.933 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:16:57.933 NVMe0n1 : 1.00 18894.76 73.81 0.00 0.00 6757.85 2070.34 13702.98
00:16:57.933 ===================================================================================================================
00:16:57.933 Total : 18894.76 73.81 0.00 0.00 6757.85 2070.34 13702.98
00:16:57.933 Received shutdown signal, test time was about 1.000000 seconds
00:16:57.933
00:16:57.933 Latency(us)
00:16:57.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:57.933 ===================================================================================================================
00:16:57.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:57.933 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.933 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.238 rmmod nvme_tcp 00:16:58.238 rmmod nvme_fabrics 00:16:58.238 rmmod nvme_keyring 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 79789 ']' 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 79789 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 79789 ']' 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller --
common/autotest_common.sh@950 -- # kill -0 79789 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79789 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79789' 00:16:58.238 killing process with pid 79789 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 79789 00:16:58.238 [2024-05-15 08:57:14.243107] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 79789 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.238 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.239 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.239 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.239 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.239 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.498 08:57:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:58.498 00:16:58.498 real 0m4.122s 00:16:58.498 user 0m12.166s 00:16:58.498 sys 0m0.981s 00:16:58.498 ************************************ 00:16:58.498 END TEST nvmf_multicontroller 00:16:58.498 ************************************ 00:16:58.498 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:58.498 08:57:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:58.498 08:57:14 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:58.498 08:57:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:58.498 08:57:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:58.498 08:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:58.498 ************************************ 00:16:58.498 START TEST nvmf_aer 00:16:58.498 ************************************ 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:58.498 * Looking for test storage... 
00:16:58.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.498 
08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:58.498 Cannot find device "nvmf_tgt_br" 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.498 Cannot find device "nvmf_tgt_br2" 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:58.498 Cannot find device "nvmf_tgt_br" 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:58.498 Cannot find device "nvmf_tgt_br2" 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:58.498 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:58.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:58.757 00:16:58.757 --- 10.0.0.2 ping statistics --- 00:16:58.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.757 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:58.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:58.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:58.757 00:16:58.757 --- 10.0.0.3 ping statistics --- 00:16:58.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.757 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:58.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:58.757 00:16:58.757 --- 10.0.0.1 ping statistics --- 00:16:58.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.757 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:58.757 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.015 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=80078 00:16:59.015 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 80078 00:16:59.015 08:57:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:59.015 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 80078 ']' 00:16:59.015 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.016 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:59.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.016 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.016 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:59.016 08:57:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.016 [2024-05-15 08:57:15.050240] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:59.016 [2024-05-15 08:57:15.050339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.016 [2024-05-15 08:57:15.188962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.016 [2024-05-15 08:57:15.248430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.016 [2024-05-15 08:57:15.248488] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:59.016 [2024-05-15 08:57:15.248508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.016 [2024-05-15 08:57:15.248521] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.016 [2024-05-15 08:57:15.248532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.274 [2024-05-15 08:57:15.249197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.274 [2024-05-15 08:57:15.249303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.274 [2024-05-15 08:57:15.249394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.274 [2024-05-15 08:57:15.249407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.840 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.840 [2024-05-15 08:57:16.064861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 Malloc0 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 [2024-05-15 08:57:16.121713] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:00.099 [2024-05-15 08:57:16.121989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 [ 00:17:00.099 { 00:17:00.099 "allow_any_host": true, 00:17:00.099 "hosts": [], 00:17:00.099 "listen_addresses": [], 00:17:00.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:00.099 "subtype": "Discovery" 00:17:00.099 }, 00:17:00.099 { 00:17:00.099 "allow_any_host": true, 00:17:00.099 "hosts": [], 00:17:00.099 "listen_addresses": [ 00:17:00.099 { 00:17:00.099 "adrfam": "IPv4", 00:17:00.099 "traddr": "10.0.0.2", 00:17:00.099 "trsvcid": "4420", 00:17:00.099 "trtype": "TCP" 00:17:00.099 } 00:17:00.099 ], 00:17:00.099 "max_cntlid": 65519, 00:17:00.099 "max_namespaces": 2, 00:17:00.099 "min_cntlid": 1, 00:17:00.099 "model_number": "SPDK bdev Controller", 00:17:00.099 "namespaces": [ 00:17:00.099 { 00:17:00.099 "bdev_name": "Malloc0", 00:17:00.099 "name": "Malloc0", 00:17:00.099 "nguid": "E82EC9DEBF9945B594E3B1C63FBCBF04", 00:17:00.099 "nsid": 1, 00:17:00.099 "uuid": "e82ec9de-bf99-45b5-94e3-b1c63fbcbf04" 00:17:00.099 } 00:17:00.099 ], 00:17:00.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.099 "serial_number": "SPDK00000000000001", 00:17:00.099 "subtype": "NVMe" 00:17:00.099 } 00:17:00.099 ] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=80132 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:17:00.099 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 Malloc1 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 Asynchronous Event Request test 00:17:00.358 Attaching to 10.0.0.2 00:17:00.358 Attached to 10.0.0.2 00:17:00.358 Registering asynchronous event callbacks... 00:17:00.358 Starting namespace attribute notice tests for all controllers... 00:17:00.358 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:00.358 aer_cb - Changed Namespace 00:17:00.358 Cleaning up... 00:17:00.358 [ 00:17:00.358 { 00:17:00.358 "allow_any_host": true, 00:17:00.358 "hosts": [], 00:17:00.358 "listen_addresses": [], 00:17:00.358 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:00.358 "subtype": "Discovery" 00:17:00.358 }, 00:17:00.358 { 00:17:00.358 "allow_any_host": true, 00:17:00.358 "hosts": [], 00:17:00.358 "listen_addresses": [ 00:17:00.358 { 00:17:00.358 "adrfam": "IPv4", 00:17:00.358 "traddr": "10.0.0.2", 00:17:00.358 "trsvcid": "4420", 00:17:00.358 "trtype": "TCP" 00:17:00.358 } 00:17:00.358 ], 00:17:00.358 "max_cntlid": 65519, 00:17:00.358 "max_namespaces": 2, 00:17:00.358 "min_cntlid": 1, 00:17:00.358 "model_number": "SPDK bdev Controller", 00:17:00.358 "namespaces": [ 00:17:00.358 { 00:17:00.358 "bdev_name": "Malloc0", 00:17:00.358 "name": "Malloc0", 00:17:00.358 "nguid": "E82EC9DEBF9945B594E3B1C63FBCBF04", 00:17:00.358 "nsid": 1, 00:17:00.358 "uuid": "e82ec9de-bf99-45b5-94e3-b1c63fbcbf04" 00:17:00.358 }, 00:17:00.358 { 00:17:00.358 "bdev_name": "Malloc1", 00:17:00.358 "name": "Malloc1", 00:17:00.358 "nguid": "7ABD150CDAFE435C899649C7C034D181", 00:17:00.358 "nsid": 2, 00:17:00.358 "uuid": "7abd150c-dafe-435c-8996-49c7c034d181" 00:17:00.358 } 00:17:00.358 ], 00:17:00.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.358 "serial_number": "SPDK00000000000001", 00:17:00.358 "subtype": "NVMe" 00:17:00.358 } 00:17:00.358 ] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 80132 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.358 rmmod nvme_tcp 00:17:00.358 rmmod nvme_fabrics 00:17:00.358 rmmod nvme_keyring 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 80078 ']' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 80078 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 80078 ']' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 80078 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80078 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:00.358 killing process with pid 80078 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80078' 00:17:00.358 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 80078 00:17:00.358 [2024-05-15 08:57:16.590648] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:00.359 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 80078 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:00.617 00:17:00.617 real 0m2.290s 00:17:00.617 user 0m6.294s 00:17:00.617 sys 0m0.598s 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:00.617 08:57:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.617 ************************************ 00:17:00.617 END TEST nvmf_aer 00:17:00.617 ************************************ 00:17:00.876 08:57:16 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:00.876 08:57:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:00.876 08:57:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:00.876 08:57:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.876 ************************************ 00:17:00.876 START TEST nvmf_async_init 00:17:00.876 ************************************ 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:00.876 * Looking for test storage... 00:17:00.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.876 08:57:16 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.876 08:57:16 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:00.876 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8bf38e89c17e42408fd240883dde4daa 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.877 08:57:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:00.877 Cannot find device "nvmf_tgt_br" 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.877 Cannot find device "nvmf_tgt_br2" 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:00.877 Cannot find device "nvmf_tgt_br" 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:00.877 Cannot find device "nvmf_tgt_br2" 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:00.877 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:01.136 
08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:01.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:01.136 00:17:01.136 --- 10.0.0.2 ping statistics --- 00:17:01.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.136 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:01.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:01.136 00:17:01.136 --- 10.0.0.3 ping statistics --- 00:17:01.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.136 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:01.136 00:17:01.136 --- 10.0.0.1 ping statistics --- 00:17:01.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.136 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=80299 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 80299 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 80299 ']' 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.136 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.137 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.137 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.395 [2024-05-15 08:57:17.401458] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:01.395 [2024-05-15 08:57:17.401551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.395 [2024-05-15 08:57:17.536200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.395 [2024-05-15 08:57:17.607597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.395 [2024-05-15 08:57:17.607652] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:01.395 [2024-05-15 08:57:17.607665] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.395 [2024-05-15 08:57:17.607675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.395 [2024-05-15 08:57:17.607684] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.395 [2024-05-15 08:57:17.607715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.654 [2024-05-15 08:57:17.737659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.654 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.655 null0 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8bf38e89c17e42408fd240883dde4daa 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:01.655 
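The target-side plumbing traced above (transport, null bdev, subsystem, namespace, listener) is all driven through rpc_cmd, the harness wrapper around the tree's scripts/rpc.py. A minimal hand-run sketch of the same sequence against this target, assuming the default /var/tmp/spdk.sock RPC socket and reusing the values from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                        # "-t tcp -o" is NVMF_TRANSPORT_OPTS for this TCP run
  $rpc bdev_null_create null0 1024 512                        # 1024 MiB null bdev with 512-byte blocks (2097152 blocks)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host to connect
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8bf38e89c17e42408fd240883dde4daa
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The nguid handed to nvmf_subsystem_add_ns is simply the uuidgen output with the dashes stripped, which is why the bdev_get_bdevs dumps below report the dashed form 8bf38e89-c17e-4240-8fd2-40883dde4daa as the namespace alias and uuid.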
08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.655 [2024-05-15 08:57:17.777577] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:01.655 [2024-05-15 08:57:17.777789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.655 08:57:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.914 nvme0n1 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.914 [ 00:17:01.914 { 00:17:01.914 "aliases": [ 00:17:01.914 "8bf38e89-c17e-4240-8fd2-40883dde4daa" 00:17:01.914 ], 00:17:01.914 "assigned_rate_limits": { 00:17:01.914 "r_mbytes_per_sec": 0, 00:17:01.914 "rw_ios_per_sec": 0, 00:17:01.914 "rw_mbytes_per_sec": 0, 00:17:01.914 "w_mbytes_per_sec": 0 00:17:01.914 }, 00:17:01.914 "block_size": 512, 00:17:01.914 "claimed": false, 00:17:01.914 "driver_specific": { 00:17:01.914 "mp_policy": "active_passive", 00:17:01.914 "nvme": [ 00:17:01.914 { 00:17:01.914 "ctrlr_data": { 00:17:01.914 "ana_reporting": false, 00:17:01.914 "cntlid": 1, 00:17:01.914 "firmware_revision": "24.05", 00:17:01.914 "model_number": "SPDK bdev Controller", 00:17:01.914 "multi_ctrlr": true, 00:17:01.914 "oacs": { 00:17:01.914 "firmware": 0, 00:17:01.914 "format": 0, 00:17:01.914 "ns_manage": 0, 00:17:01.914 "security": 0 00:17:01.914 }, 00:17:01.914 "serial_number": "00000000000000000000", 00:17:01.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.914 "vendor_id": "0x8086" 00:17:01.914 }, 00:17:01.914 "ns_data": { 00:17:01.914 "can_share": true, 00:17:01.914 "id": 1 00:17:01.914 }, 00:17:01.914 "trid": { 00:17:01.914 "adrfam": "IPv4", 00:17:01.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.914 "traddr": "10.0.0.2", 00:17:01.914 "trsvcid": "4420", 00:17:01.914 "trtype": "TCP" 00:17:01.914 }, 00:17:01.914 "vs": { 00:17:01.914 "nvme_version": "1.3" 00:17:01.914 } 00:17:01.914 } 00:17:01.914 ] 00:17:01.914 }, 00:17:01.914 "memory_domains": [ 00:17:01.914 { 00:17:01.914 "dma_device_id": "system", 00:17:01.914 "dma_device_type": 1 00:17:01.914 } 00:17:01.914 ], 00:17:01.914 "name": "nvme0n1", 00:17:01.914 "num_blocks": 2097152, 00:17:01.914 "product_name": "NVMe disk", 00:17:01.914 "supported_io_types": { 00:17:01.914 "abort": true, 00:17:01.914 "compare": true, 00:17:01.914 "compare_and_write": true, 00:17:01.914 "flush": true, 00:17:01.914 "nvme_admin": true, 00:17:01.914 "nvme_io": true, 00:17:01.914 "read": true, 00:17:01.914 "reset": true, 00:17:01.914 "unmap": false, 00:17:01.914 "write": true, 00:17:01.914 "write_zeroes": true 00:17:01.914 }, 
00:17:01.914 "uuid": "8bf38e89-c17e-4240-8fd2-40883dde4daa", 00:17:01.914 "zoned": false 00:17:01.914 } 00:17:01.914 ] 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.914 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.914 [2024-05-15 08:57:18.037763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.914 [2024-05-15 08:57:18.037897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201aeb0 (9): Bad file descriptor 00:17:02.174 [2024-05-15 08:57:18.169821] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 [ 00:17:02.174 { 00:17:02.174 "aliases": [ 00:17:02.174 "8bf38e89-c17e-4240-8fd2-40883dde4daa" 00:17:02.174 ], 00:17:02.174 "assigned_rate_limits": { 00:17:02.174 "r_mbytes_per_sec": 0, 00:17:02.174 "rw_ios_per_sec": 0, 00:17:02.174 "rw_mbytes_per_sec": 0, 00:17:02.174 "w_mbytes_per_sec": 0 00:17:02.174 }, 00:17:02.174 "block_size": 512, 00:17:02.174 "claimed": false, 00:17:02.174 "driver_specific": { 00:17:02.174 "mp_policy": "active_passive", 00:17:02.174 "nvme": [ 00:17:02.174 { 00:17:02.174 "ctrlr_data": { 00:17:02.174 "ana_reporting": false, 00:17:02.174 "cntlid": 2, 00:17:02.174 "firmware_revision": "24.05", 00:17:02.174 "model_number": "SPDK bdev Controller", 00:17:02.174 "multi_ctrlr": true, 00:17:02.174 "oacs": { 00:17:02.174 "firmware": 0, 00:17:02.174 "format": 0, 00:17:02.174 "ns_manage": 0, 00:17:02.174 "security": 0 00:17:02.174 }, 00:17:02.174 "serial_number": "00000000000000000000", 00:17:02.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.174 "vendor_id": "0x8086" 00:17:02.174 }, 00:17:02.174 "ns_data": { 00:17:02.174 "can_share": true, 00:17:02.174 "id": 1 00:17:02.174 }, 00:17:02.174 "trid": { 00:17:02.174 "adrfam": "IPv4", 00:17:02.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.174 "traddr": "10.0.0.2", 00:17:02.174 "trsvcid": "4420", 00:17:02.174 "trtype": "TCP" 00:17:02.174 }, 00:17:02.174 "vs": { 00:17:02.174 "nvme_version": "1.3" 00:17:02.174 } 00:17:02.174 } 00:17:02.174 ] 00:17:02.174 }, 00:17:02.174 "memory_domains": [ 00:17:02.174 { 00:17:02.174 "dma_device_id": "system", 00:17:02.174 "dma_device_type": 1 00:17:02.174 } 00:17:02.174 ], 00:17:02.174 "name": "nvme0n1", 00:17:02.174 "num_blocks": 2097152, 00:17:02.174 "product_name": "NVMe disk", 00:17:02.174 "supported_io_types": { 00:17:02.174 "abort": true, 00:17:02.174 "compare": true, 00:17:02.174 "compare_and_write": true, 00:17:02.174 "flush": true, 00:17:02.174 "nvme_admin": true, 00:17:02.174 "nvme_io": true, 00:17:02.174 "read": true, 00:17:02.174 "reset": true, 00:17:02.174 "unmap": false, 00:17:02.174 "write": true, 00:17:02.174 "write_zeroes": true 00:17:02.174 }, 00:17:02.174 "uuid": "8bf38e89-c17e-4240-8fd2-40883dde4daa", 00:17:02.174 
"zoned": false 00:17:02.174 } 00:17:02.174 ] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sMHtqLTPZJ 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sMHtqLTPZJ 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 [2024-05-15 08:57:18.229888] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:02.174 [2024-05-15 08:57:18.230055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sMHtqLTPZJ 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 [2024-05-15 08:57:18.237886] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sMHtqLTPZJ 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 [2024-05-15 08:57:18.249928] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.174 [2024-05-15 08:57:18.250046] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:17:02.174 nvme0n1 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 [ 00:17:02.174 { 00:17:02.174 "aliases": [ 00:17:02.174 "8bf38e89-c17e-4240-8fd2-40883dde4daa" 00:17:02.174 ], 00:17:02.174 "assigned_rate_limits": { 00:17:02.174 "r_mbytes_per_sec": 0, 00:17:02.174 "rw_ios_per_sec": 0, 00:17:02.174 "rw_mbytes_per_sec": 0, 00:17:02.174 "w_mbytes_per_sec": 0 00:17:02.174 }, 00:17:02.174 "block_size": 512, 00:17:02.174 "claimed": false, 00:17:02.174 "driver_specific": { 00:17:02.174 "mp_policy": "active_passive", 00:17:02.174 "nvme": [ 00:17:02.174 { 00:17:02.174 "ctrlr_data": { 00:17:02.174 "ana_reporting": false, 00:17:02.174 "cntlid": 3, 00:17:02.174 "firmware_revision": "24.05", 00:17:02.174 "model_number": "SPDK bdev Controller", 00:17:02.174 "multi_ctrlr": true, 00:17:02.174 "oacs": { 00:17:02.174 "firmware": 0, 00:17:02.174 "format": 0, 00:17:02.174 "ns_manage": 0, 00:17:02.174 "security": 0 00:17:02.174 }, 00:17:02.174 "serial_number": "00000000000000000000", 00:17:02.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.174 "vendor_id": "0x8086" 00:17:02.174 }, 00:17:02.174 "ns_data": { 00:17:02.174 "can_share": true, 00:17:02.174 "id": 1 00:17:02.174 }, 00:17:02.174 "trid": { 00:17:02.174 "adrfam": "IPv4", 00:17:02.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.174 "traddr": "10.0.0.2", 00:17:02.174 "trsvcid": "4421", 00:17:02.174 "trtype": "TCP" 00:17:02.174 }, 00:17:02.174 "vs": { 00:17:02.174 "nvme_version": "1.3" 00:17:02.174 } 00:17:02.174 } 00:17:02.174 ] 00:17:02.174 }, 00:17:02.174 "memory_domains": [ 00:17:02.174 { 00:17:02.174 "dma_device_id": "system", 00:17:02.174 "dma_device_type": 1 00:17:02.174 } 00:17:02.174 ], 00:17:02.174 "name": "nvme0n1", 00:17:02.174 "num_blocks": 2097152, 00:17:02.174 "product_name": "NVMe disk", 00:17:02.174 "supported_io_types": { 00:17:02.174 "abort": true, 00:17:02.174 "compare": true, 00:17:02.174 "compare_and_write": true, 00:17:02.174 "flush": true, 00:17:02.174 "nvme_admin": true, 00:17:02.174 "nvme_io": true, 00:17:02.174 "read": true, 00:17:02.174 "reset": true, 00:17:02.174 "unmap": false, 00:17:02.174 "write": true, 00:17:02.174 "write_zeroes": true 00:17:02.174 }, 00:17:02.174 "uuid": "8bf38e89-c17e-4240-8fd2-40883dde4daa", 00:17:02.174 "zoned": false 00:17:02.174 } 00:17:02.174 ] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.sMHtqLTPZJ 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:02.174 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.433 rmmod nvme_tcp 00:17:02.433 rmmod nvme_fabrics 00:17:02.433 rmmod nvme_keyring 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 80299 ']' 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 80299 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 80299 ']' 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 80299 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80299 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:02.433 killing process with pid 80299 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80299' 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 80299 00:17:02.433 [2024-05-15 08:57:18.488230] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:02.433 [2024-05-15 08:57:18.488270] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:02.433 [2024-05-15 08:57:18.488282] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 80299 00:17:02.433 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:02.691 00:17:02.691 real 0m1.819s 00:17:02.691 user 0m1.516s 00:17:02.691 sys 0m0.513s 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.691 ************************************ 00:17:02.691 END TEST nvmf_async_init 00:17:02.691 ************************************ 00:17:02.691 08:57:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:02.691 08:57:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:02.691 08:57:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:02.691 08:57:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.691 08:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.691 ************************************ 00:17:02.691 START TEST dma 00:17:02.691 ************************************ 00:17:02.691 08:57:18 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:02.691 * Looking for test storage... 00:17:02.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.691 08:57:18 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.691 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.691 08:57:18 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.691 08:57:18 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.691 08:57:18 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.691 08:57:18 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.692 08:57:18 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.692 08:57:18 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.692 08:57:18 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:02.692 08:57:18 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.692 08:57:18 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.692 08:57:18 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:02.692 08:57:18 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:02.692 00:17:02.692 real 0m0.091s 00:17:02.692 user 0m0.046s 00:17:02.692 sys 0m0.049s 00:17:02.692 08:57:18 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.692 08:57:18 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:02.692 ************************************ 
00:17:02.692 END TEST dma 00:17:02.692 ************************************ 00:17:02.692 08:57:18 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:02.692 08:57:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:02.692 08:57:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.692 08:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.692 ************************************ 00:17:02.692 START TEST nvmf_identify 00:17:02.692 ************************************ 00:17:02.692 08:57:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:02.949 * Looking for test storage... 00:17:02.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.949 08:57:18 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:02.950 08:57:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:02.950 Cannot find device "nvmf_tgt_br" 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.950 Cannot find device "nvmf_tgt_br2" 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:02.950 Cannot find device "nvmf_tgt_br" 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:17:02.950 Cannot find device "nvmf_tgt_br2" 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.950 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.209 08:57:19 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:03.209 00:17:03.209 --- 10.0.0.2 ping statistics --- 00:17:03.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.209 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:17:03.209 00:17:03.209 --- 10.0.0.3 ping statistics --- 00:17:03.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.209 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:03.209 00:17:03.209 --- 10.0.0.1 ping statistics --- 00:17:03.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.209 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.209 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=80551 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 80551 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 80551 ']' 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:03.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:03.210 08:57:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.468 [2024-05-15 08:57:19.444656] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:03.468 [2024-05-15 08:57:19.444778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.468 [2024-05-15 08:57:19.592717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.468 [2024-05-15 08:57:19.665771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.468 [2024-05-15 08:57:19.665830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.468 [2024-05-15 08:57:19.665844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.468 [2024-05-15 08:57:19.665855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.468 [2024-05-15 08:57:19.665864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.468 [2024-05-15 08:57:19.669606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.468 [2024-05-15 08:57:19.669707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.468 [2024-05-15 08:57:19.669827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.468 [2024-05-15 08:57:19.669819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.403 [2024-05-15 08:57:20.486850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.403 Malloc0 00:17:04.403 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.404 08:57:20 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.404 [2024-05-15 08:57:20.572026] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:04.404 [2024-05-15 08:57:20.572364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.404 [ 00:17:04.404 { 00:17:04.404 "allow_any_host": true, 00:17:04.404 "hosts": [], 00:17:04.404 "listen_addresses": [ 00:17:04.404 { 00:17:04.404 "adrfam": "IPv4", 00:17:04.404 "traddr": "10.0.0.2", 00:17:04.404 "trsvcid": "4420", 00:17:04.404 "trtype": "TCP" 00:17:04.404 } 00:17:04.404 ], 00:17:04.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.404 "subtype": "Discovery" 00:17:04.404 }, 00:17:04.404 { 00:17:04.404 "allow_any_host": true, 00:17:04.404 "hosts": [], 00:17:04.404 "listen_addresses": [ 00:17:04.404 { 00:17:04.404 "adrfam": "IPv4", 00:17:04.404 "traddr": "10.0.0.2", 00:17:04.404 "trsvcid": "4420", 00:17:04.404 "trtype": "TCP" 00:17:04.404 } 00:17:04.404 ], 00:17:04.404 "max_cntlid": 65519, 00:17:04.404 "max_namespaces": 32, 00:17:04.404 "min_cntlid": 1, 00:17:04.404 "model_number": "SPDK bdev Controller", 00:17:04.404 "namespaces": [ 00:17:04.404 { 00:17:04.404 "bdev_name": "Malloc0", 00:17:04.404 "eui64": "ABCDEF0123456789", 00:17:04.404 "name": "Malloc0", 00:17:04.404 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:04.404 "nsid": 1, 00:17:04.404 "uuid": "bab776e7-a2a0-4c29-aac9-8bfffab981be" 00:17:04.404 } 00:17:04.404 ], 00:17:04.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.404 "serial_number": "SPDK00000000000001", 
00:17:04.404 "subtype": "NVMe" 00:17:04.404 } 00:17:04.404 ] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.404 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:04.404 [2024-05-15 08:57:20.617635] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:04.404 [2024-05-15 08:57:20.617674] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80604 ] 00:17:04.664 [2024-05-15 08:57:20.752958] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:04.664 [2024-05-15 08:57:20.753040] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:04.664 [2024-05-15 08:57:20.753048] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:04.664 [2024-05-15 08:57:20.753063] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:04.664 [2024-05-15 08:57:20.753077] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:04.664 [2024-05-15 08:57:20.753232] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:04.664 [2024-05-15 08:57:20.753305] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xaa8280 0 00:17:04.665 [2024-05-15 08:57:20.757586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:04.665 [2024-05-15 08:57:20.757609] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:04.665 [2024-05-15 08:57:20.757616] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:04.665 [2024-05-15 08:57:20.757620] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:04.665 [2024-05-15 08:57:20.757666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.757674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.757679] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.757695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:04.665 [2024-05-15 08:57:20.757728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.762592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.762622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.762629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.762634] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.762646] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:04.665 [2024-05-15 08:57:20.762655] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:04.665 [2024-05-15 08:57:20.762662] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:04.665 [2024-05-15 08:57:20.762680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.762686] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.762691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.762709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.665 [2024-05-15 08:57:20.762741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.762824] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.762841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.762849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.762857] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.762864] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:04.665 [2024-05-15 08:57:20.762874] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:04.665 [2024-05-15 08:57:20.762883] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.762887] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.762892] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.762902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.665 [2024-05-15 08:57:20.762933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.762995] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.763010] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.763017] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763022] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.763029] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:04.665 [2024-05-15 08:57:20.763039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.665 [2024-05-15 08:57:20.763048] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.763069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.665 [2024-05-15 08:57:20.763096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.763164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.763175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.763180] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763184] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.763192] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.665 [2024-05-15 08:57:20.763209] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763218] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763223] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.763232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.665 [2024-05-15 08:57:20.763261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.763323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.763340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.763346] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763351] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.763357] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:04.665 [2024-05-15 08:57:20.763363] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:04.665 [2024-05-15 08:57:20.763373] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.665 [2024-05-15 08:57:20.763482] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:04.665 [2024-05-15 08:57:20.763492] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.665 [2024-05-15 08:57:20.763503] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763508] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763512] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.763521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:04.665 [2024-05-15 08:57:20.763550] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.763626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.763639] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.763643] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763648] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.763654] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.665 [2024-05-15 08:57:20.763666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763676] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.763684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.665 [2024-05-15 08:57:20.763713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.763771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.665 [2024-05-15 08:57:20.763784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.665 [2024-05-15 08:57:20.763789] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763793] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.665 [2024-05-15 08:57:20.763799] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.665 [2024-05-15 08:57:20.763805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:04.665 [2024-05-15 08:57:20.763814] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:04.665 [2024-05-15 08:57:20.763832] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.665 [2024-05-15 08:57:20.763844] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.665 [2024-05-15 08:57:20.763850] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.665 [2024-05-15 08:57:20.763862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.665 [2024-05-15 08:57:20.763895] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.665 [2024-05-15 08:57:20.764006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.665 [2024-05-15 08:57:20.764017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.665 [2024-05-15 08:57:20.764022] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764028] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaa8280): datao=0, datal=4096, cccid=0 00:17:04.666 [2024-05-15 08:57:20.764037] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaf0950) on tqpair(0xaa8280): expected_datao=0, payload_size=4096 00:17:04.666 [2024-05-15 08:57:20.764046] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764060] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764069] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.666 [2024-05-15 08:57:20.764091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.666 [2024-05-15 08:57:20.764095] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.666 [2024-05-15 08:57:20.764122] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:04.666 [2024-05-15 08:57:20.764133] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:04.666 [2024-05-15 08:57:20.764141] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:04.666 [2024-05-15 08:57:20.764150] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:04.666 [2024-05-15 08:57:20.764156] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:04.666 [2024-05-15 08:57:20.764162] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:04.666 [2024-05-15 08:57:20.764173] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.666 [2024-05-15 08:57:20.764193] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764203] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764208] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.666 [2024-05-15 08:57:20.764246] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.666 [2024-05-15 08:57:20.764322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.666 [2024-05-15 08:57:20.764335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.666 [2024-05-15 08:57:20.764341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0950) on tqpair=0xaa8280 00:17:04.666 [2024-05-15 08:57:20.764356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 
[2024-05-15 08:57:20.764364] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764369] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.666 [2024-05-15 08:57:20.764384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764395] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.666 [2024-05-15 08:57:20.764416] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764424] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764431] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.666 [2024-05-15 08:57:20.764449] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764453] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764457] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.666 [2024-05-15 08:57:20.764470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.666 [2024-05-15 08:57:20.764485] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.666 [2024-05-15 08:57:20.764493] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764498] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.666 [2024-05-15 08:57:20.764544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0950, cid 0, qid 0 00:17:04.666 [2024-05-15 08:57:20.764557] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0ab0, cid 1, qid 0 00:17:04.666 [2024-05-15 08:57:20.764579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0c10, cid 2, qid 0 00:17:04.666 [2024-05-15 08:57:20.764586] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.666 [2024-05-15 08:57:20.764592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0ed0, cid 4, qid 0 00:17:04.666 [2024-05-15 08:57:20.764684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:17:04.666 [2024-05-15 08:57:20.764710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.666 [2024-05-15 08:57:20.764717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764722] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0ed0) on tqpair=0xaa8280 00:17:04.666 [2024-05-15 08:57:20.764728] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:04.666 [2024-05-15 08:57:20.764735] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:04.666 [2024-05-15 08:57:20.764749] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764754] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.764763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.666 [2024-05-15 08:57:20.764789] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0ed0, cid 4, qid 0 00:17:04.666 [2024-05-15 08:57:20.764861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.666 [2024-05-15 08:57:20.764874] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.666 [2024-05-15 08:57:20.764883] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764890] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaa8280): datao=0, datal=4096, cccid=4 00:17:04.666 [2024-05-15 08:57:20.764896] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaf0ed0) on tqpair(0xaa8280): expected_datao=0, payload_size=4096 00:17:04.666 [2024-05-15 08:57:20.764901] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764909] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764914] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.666 [2024-05-15 08:57:20.764933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.666 [2024-05-15 08:57:20.764941] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.764947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0ed0) on tqpair=0xaa8280 00:17:04.666 [2024-05-15 08:57:20.764969] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:04.666 [2024-05-15 08:57:20.765006] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.765013] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.765022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.666 [2024-05-15 08:57:20.765030] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.765035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:04.666 [2024-05-15 08:57:20.765040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaa8280) 00:17:04.666 [2024-05-15 08:57:20.765050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.666 [2024-05-15 08:57:20.765084] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0ed0, cid 4, qid 0 00:17:04.666 [2024-05-15 08:57:20.765097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf1030, cid 5, qid 0 00:17:04.666 [2024-05-15 08:57:20.765203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.666 [2024-05-15 08:57:20.765213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.666 [2024-05-15 08:57:20.765219] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.666 [2024-05-15 08:57:20.765223] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaa8280): datao=0, datal=1024, cccid=4 00:17:04.667 [2024-05-15 08:57:20.765229] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaf0ed0) on tqpair(0xaa8280): expected_datao=0, payload_size=1024 00:17:04.667 [2024-05-15 08:57:20.765234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.765245] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.765253] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.765262] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.667 [2024-05-15 08:57:20.765270] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.667 [2024-05-15 08:57:20.765277] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.765284] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf1030) on tqpair=0xaa8280 00:17:04.667 [2024-05-15 08:57:20.805678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.667 [2024-05-15 08:57:20.805724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.667 [2024-05-15 08:57:20.805733] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.805739] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0ed0) on tqpair=0xaa8280 00:17:04.667 [2024-05-15 08:57:20.805777] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.805784] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaa8280) 00:17:04.667 [2024-05-15 08:57:20.805798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.667 [2024-05-15 08:57:20.805838] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0ed0, cid 4, qid 0 00:17:04.667 [2024-05-15 08:57:20.805972] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.667 [2024-05-15 08:57:20.805980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.667 [2024-05-15 08:57:20.805985] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.805989] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaa8280): datao=0, datal=3072, cccid=4 00:17:04.667 [2024-05-15 
08:57:20.805995] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaf0ed0) on tqpair(0xaa8280): expected_datao=0, payload_size=3072 00:17:04.667 [2024-05-15 08:57:20.806000] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806012] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806020] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806033] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.667 [2024-05-15 08:57:20.806043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.667 [2024-05-15 08:57:20.806050] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806056] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0ed0) on tqpair=0xaa8280 00:17:04.667 [2024-05-15 08:57:20.806073] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaa8280) 00:17:04.667 [2024-05-15 08:57:20.806093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.667 [2024-05-15 08:57:20.806136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0ed0, cid 4, qid 0 00:17:04.667 [2024-05-15 08:57:20.806220] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.667 [2024-05-15 08:57:20.806234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.667 [2024-05-15 08:57:20.806242] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806246] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaa8280): datao=0, datal=8, cccid=4 00:17:04.667 [2024-05-15 08:57:20.806253] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaf0ed0) on tqpair(0xaa8280): expected_datao=0, payload_size=8 00:17:04.667 [2024-05-15 08:57:20.806261] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806273] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.667 [2024-05-15 08:57:20.806282] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.667 ===================================================== 00:17:04.667 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:04.667 ===================================================== 00:17:04.667 Controller Capabilities/Features 00:17:04.667 ================================ 00:17:04.667 Vendor ID: 0000 00:17:04.667 Subsystem Vendor ID: 0000 00:17:04.667 Serial Number: .................... 00:17:04.667 Model Number: ........................................ 
00:17:04.667 Firmware Version: 24.05 00:17:04.667 Recommended Arb Burst: 0 00:17:04.667 IEEE OUI Identifier: 00 00 00 00:17:04.667 Multi-path I/O 00:17:04.667 May have multiple subsystem ports: No 00:17:04.667 May have multiple controllers: No 00:17:04.667 Associated with SR-IOV VF: No 00:17:04.667 Max Data Transfer Size: 131072 00:17:04.667 Max Number of Namespaces: 0 00:17:04.667 Max Number of I/O Queues: 1024 00:17:04.667 NVMe Specification Version (VS): 1.3 00:17:04.667 NVMe Specification Version (Identify): 1.3 00:17:04.667 Maximum Queue Entries: 128 00:17:04.667 Contiguous Queues Required: Yes 00:17:04.667 Arbitration Mechanisms Supported 00:17:04.667 Weighted Round Robin: Not Supported 00:17:04.667 Vendor Specific: Not Supported 00:17:04.667 Reset Timeout: 15000 ms 00:17:04.667 Doorbell Stride: 4 bytes 00:17:04.667 NVM Subsystem Reset: Not Supported 00:17:04.667 Command Sets Supported 00:17:04.667 NVM Command Set: Supported 00:17:04.667 Boot Partition: Not Supported 00:17:04.667 Memory Page Size Minimum: 4096 bytes 00:17:04.667 Memory Page Size Maximum: 4096 bytes 00:17:04.667 Persistent Memory Region: Not Supported 00:17:04.667 Optional Asynchronous Events Supported 00:17:04.667 Namespace Attribute Notices: Not Supported 00:17:04.667 Firmware Activation Notices: Not Supported 00:17:04.667 ANA Change Notices: Not Supported 00:17:04.667 PLE Aggregate Log Change Notices: Not Supported 00:17:04.667 LBA Status Info Alert Notices: Not Supported 00:17:04.667 EGE Aggregate Log Change Notices: Not Supported 00:17:04.667 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.667 Zone Descriptor Change Notices: Not Supported 00:17:04.667 Discovery Log Change Notices: Supported 00:17:04.667 Controller Attributes 00:17:04.667 128-bit Host Identifier: Not Supported 00:17:04.667 Non-Operational Permissive Mode: Not Supported 00:17:04.667 NVM Sets: Not Supported 00:17:04.667 Read Recovery Levels: Not Supported 00:17:04.667 Endurance Groups: Not Supported 00:17:04.667 Predictable Latency Mode: Not Supported 00:17:04.667 Traffic Based Keep ALive: Not Supported 00:17:04.667 Namespace Granularity: Not Supported 00:17:04.667 SQ Associations: Not Supported 00:17:04.667 UUID List: Not Supported 00:17:04.667 Multi-Domain Subsystem: Not Supported 00:17:04.667 Fixed Capacity Management: Not Supported 00:17:04.667 Variable Capacity Management: Not Supported 00:17:04.667 Delete Endurance Group: Not Supported 00:17:04.667 Delete NVM Set: Not Supported 00:17:04.667 Extended LBA Formats Supported: Not Supported 00:17:04.667 Flexible Data Placement Supported: Not Supported 00:17:04.667 00:17:04.667 Controller Memory Buffer Support 00:17:04.667 ================================ 00:17:04.667 Supported: No 00:17:04.667 00:17:04.667 Persistent Memory Region Support 00:17:04.667 ================================ 00:17:04.667 Supported: No 00:17:04.667 00:17:04.667 Admin Command Set Attributes 00:17:04.667 ============================ 00:17:04.667 Security Send/Receive: Not Supported 00:17:04.667 Format NVM: Not Supported 00:17:04.667 Firmware Activate/Download: Not Supported 00:17:04.667 Namespace Management: Not Supported 00:17:04.667 Device Self-Test: Not Supported 00:17:04.667 Directives: Not Supported 00:17:04.667 NVMe-MI: Not Supported 00:17:04.667 Virtualization Management: Not Supported 00:17:04.667 Doorbell Buffer Config: Not Supported 00:17:04.667 Get LBA Status Capability: Not Supported 00:17:04.667 Command & Feature Lockdown Capability: Not Supported 00:17:04.667 Abort Command Limit: 1 00:17:04.667 Async 
Event Request Limit: 4 00:17:04.667 Number of Firmware Slots: N/A 00:17:04.667 Firmware Slot 1 Read-Only: N/A 00:17:04.667 Firmware Activation Without Reset: N/A 00:17:04.667 Multiple Update Detection Support: N/A 00:17:04.667 Firmware Update Granularity: No Information Provided 00:17:04.667 Per-Namespace SMART Log: No 00:17:04.667 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.667 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:04.667 Command Effects Log Page: Not Supported 00:17:04.667 Get Log Page Extended Data: Supported 00:17:04.667 Telemetry Log Pages: Not Supported 00:17:04.667 Persistent Event Log Pages: Not Supported 00:17:04.667 Supported Log Pages Log Page: May Support 00:17:04.667 Commands Supported & Effects Log Page: Not Supported 00:17:04.667 Feature Identifiers & Effects Log Page:May Support 00:17:04.667 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.667 Data Area 4 for Telemetry Log: Not Supported 00:17:04.667 Error Log Page Entries Supported: 128 00:17:04.667 Keep Alive: Not Supported 00:17:04.667 00:17:04.667 NVM Command Set Attributes 00:17:04.667 ========================== 00:17:04.667 Submission Queue Entry Size 00:17:04.667 Max: 1 00:17:04.667 Min: 1 00:17:04.667 Completion Queue Entry Size 00:17:04.668 Max: 1 00:17:04.668 Min: 1 00:17:04.668 Number of Namespaces: 0 00:17:04.668 Compare Command: Not Supported 00:17:04.668 Write Uncorrectable Command: Not Supported 00:17:04.668 Dataset Management Command: Not Supported 00:17:04.668 Write Zeroes Command: Not Supported 00:17:04.668 Set Features Save Field: Not Supported 00:17:04.668 Reservations: Not Supported 00:17:04.668 Timestamp: Not Supported 00:17:04.668 Copy: Not Supported 00:17:04.668 Volatile Write Cache: Not Present 00:17:04.668 Atomic Write Unit (Normal): 1 00:17:04.668 Atomic Write Unit (PFail): 1 00:17:04.668 Atomic Compare & Write Unit: 1 00:17:04.668 Fused Compare & Write: Supported 00:17:04.668 Scatter-Gather List 00:17:04.668 SGL Command Set: Supported 00:17:04.668 SGL Keyed: Supported 00:17:04.668 SGL Bit Bucket Descriptor: Not Supported 00:17:04.668 SGL Metadata Pointer: Not Supported 00:17:04.668 Oversized SGL: Not Supported 00:17:04.668 SGL Metadata Address: Not Supported 00:17:04.668 SGL Offset: Supported 00:17:04.668 Transport SGL Data Block: Not Supported 00:17:04.668 Replay Protected Memory Block: Not Supported 00:17:04.668 00:17:04.668 Firmware Slot Information 00:17:04.668 ========================= 00:17:04.668 Active slot: 0 00:17:04.668 00:17:04.668 00:17:04.668 Error Log 00:17:04.668 ========= 00:17:04.668 00:17:04.668 Active Namespaces 00:17:04.668 ================= 00:17:04.668 Discovery Log Page 00:17:04.668 ================== 00:17:04.668 Generation Counter: 2 00:17:04.668 Number of Records: 2 00:17:04.668 Record Format: 0 00:17:04.668 00:17:04.668 Discovery Log Entry 0 00:17:04.668 ---------------------- 00:17:04.668 Transport Type: 3 (TCP) 00:17:04.668 Address Family: 1 (IPv4) 00:17:04.668 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:04.668 Entry Flags: 00:17:04.668 Duplicate Returned Information: 1 00:17:04.668 Explicit Persistent Connection Support for Discovery: 1 00:17:04.668 Transport Requirements: 00:17:04.668 Secure Channel: Not Required 00:17:04.668 Port ID: 0 (0x0000) 00:17:04.668 Controller ID: 65535 (0xffff) 00:17:04.668 Admin Max SQ Size: 128 00:17:04.668 Transport Service Identifier: 4420 00:17:04.668 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:04.668 Transport Address: 10.0.0.2 00:17:04.668 
Discovery Log Entry 1 00:17:04.668 ---------------------- 00:17:04.668 Transport Type: 3 (TCP) 00:17:04.668 Address Family: 1 (IPv4) 00:17:04.668 Subsystem Type: 2 (NVM Subsystem) 00:17:04.668 Entry Flags: 00:17:04.668 Duplicate Returned Information: 0 00:17:04.668 Explicit Persistent Connection Support for Discovery: 0 00:17:04.668 Transport Requirements: 00:17:04.668 Secure Channel: Not Required 00:17:04.668 Port ID: 0 (0x0000) 00:17:04.668 Controller ID: 65535 (0xffff) 00:17:04.668 Admin Max SQ Size: 128 00:17:04.668 Transport Service Identifier: 4420 00:17:04.668 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:04.668 Transport Address: 10.0.0.2 [2024-05-15 08:57:20.846695] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.846724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.668 [2024-05-15 08:57:20.846731] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.846737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0ed0) on tqpair=0xaa8280 00:17:04.668 [2024-05-15 08:57:20.846856] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:04.668 [2024-05-15 08:57:20.846876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.668 [2024-05-15 08:57:20.846885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.668 [2024-05-15 08:57:20.846892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.668 [2024-05-15 08:57:20.846899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.668 [2024-05-15 08:57:20.846914] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.846919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.846924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.668 [2024-05-15 08:57:20.846936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.668 [2024-05-15 08:57:20.846965] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.668 [2024-05-15 08:57:20.847062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.847070] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.668 [2024-05-15 08:57:20.847075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.668 [2024-05-15 08:57:20.847100] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847109] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.668 [2024-05-15 08:57:20.847117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.668 [2024-05-15 08:57:20.847143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.668 [2024-05-15 08:57:20.847233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.847241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.668 [2024-05-15 08:57:20.847246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.668 [2024-05-15 08:57:20.847256] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:04.668 [2024-05-15 08:57:20.847262] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:04.668 [2024-05-15 08:57:20.847273] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.668 [2024-05-15 08:57:20.847291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.668 [2024-05-15 08:57:20.847311] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.668 [2024-05-15 08:57:20.847375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.847384] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.668 [2024-05-15 08:57:20.847389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.668 [2024-05-15 08:57:20.847406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.668 [2024-05-15 08:57:20.847424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.668 [2024-05-15 08:57:20.847444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.668 [2024-05-15 08:57:20.847512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.847520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.668 [2024-05-15 08:57:20.847525] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847529] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.668 [2024-05-15 08:57:20.847541] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847546] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847550] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 
00:17:04.668 [2024-05-15 08:57:20.847558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.668 [2024-05-15 08:57:20.847594] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.668 [2024-05-15 08:57:20.847658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.847666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.668 [2024-05-15 08:57:20.847671] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.668 [2024-05-15 08:57:20.847687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.668 [2024-05-15 08:57:20.847696] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.668 [2024-05-15 08:57:20.847704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.668 [2024-05-15 08:57:20.847724] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.668 [2024-05-15 08:57:20.847785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.668 [2024-05-15 08:57:20.847794] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.847799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.847803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.847814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.847819] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.847824] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.847831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 08:57:20.847851] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.847920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.669 [2024-05-15 08:57:20.847928] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.847934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.847939] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.847950] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.847955] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.847959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.847967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 
08:57:20.847987] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.848044] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.669 [2024-05-15 08:57:20.848053] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.848057] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.848073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848078] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.848090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 08:57:20.848119] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.848181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.669 [2024-05-15 08:57:20.848189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.848193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.848210] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848219] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.848227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 08:57:20.848254] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.848307] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.669 [2024-05-15 08:57:20.848315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.848320] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.848336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.848353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 08:57:20.848372] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.848425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:17:04.669 [2024-05-15 08:57:20.848433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.848437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848443] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.848455] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848460] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.848464] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.848472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 08:57:20.848491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.848553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.669 [2024-05-15 08:57:20.848561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.852588] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.852598] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.852616] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.852623] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.852627] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaa8280) 00:17:04.669 [2024-05-15 08:57:20.852637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.669 [2024-05-15 08:57:20.852667] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaf0d70, cid 3, qid 0 00:17:04.669 [2024-05-15 08:57:20.852751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.669 [2024-05-15 08:57:20.852765] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.669 [2024-05-15 08:57:20.852772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.669 [2024-05-15 08:57:20.852779] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaf0d70) on tqpair=0xaa8280 00:17:04.669 [2024-05-15 08:57:20.852794] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:04.669 00:17:04.669 08:57:20 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:04.669 [2024-05-15 08:57:20.888250] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
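Note on the step above: the trace ends with the discovery controller being shut down ("shutdown complete in 5 milliseconds") and host/identify.sh moving on to query the data subsystem directly with spdk_nvme_identify. A minimal sketch of the two invocations this test appears to perform is below; the transport string and -L all flag for the data-subsystem run are taken verbatim from the log line above, while the discovery-service variant (the same command without subnqn) is an assumption inferred from the Discovery Log Entry printed earlier, not a line copied from this run.

  # Assumed first pass: identify the discovery service (no subnqn); only the subsystem run is shown verbatim above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # Second pass, exactly as invoked by host/identify.sh@45 above: identify the NVM subsystem from
  # Discovery Log Entry 1, with full debug logging enabled (-L all)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all

The DPDK EAL parameter banner that follows belongs to this second invocation.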
00:17:04.669 [2024-05-15 08:57:20.888311] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80612 ] 00:17:04.931 [2024-05-15 08:57:21.033836] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:04.931 [2024-05-15 08:57:21.033915] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:04.931 [2024-05-15 08:57:21.033924] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:04.931 [2024-05-15 08:57:21.033938] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:04.931 [2024-05-15 08:57:21.033953] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:04.931 [2024-05-15 08:57:21.034104] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:04.931 [2024-05-15 08:57:21.034174] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7e0280 0 00:17:04.931 [2024-05-15 08:57:21.041592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:04.931 [2024-05-15 08:57:21.041621] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:04.931 [2024-05-15 08:57:21.041628] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:04.931 [2024-05-15 08:57:21.041632] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:04.931 [2024-05-15 08:57:21.041679] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.931 [2024-05-15 08:57:21.041688] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.931 [2024-05-15 08:57:21.041692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.931 [2024-05-15 08:57:21.041708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:04.931 [2024-05-15 08:57:21.041743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.931 [2024-05-15 08:57:21.049589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.931 [2024-05-15 08:57:21.049615] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.931 [2024-05-15 08:57:21.049622] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.931 [2024-05-15 08:57:21.049628] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.931 [2024-05-15 08:57:21.049643] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:04.931 [2024-05-15 08:57:21.049652] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:04.931 [2024-05-15 08:57:21.049659] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:04.931 [2024-05-15 08:57:21.049678] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.931 [2024-05-15 08:57:21.049684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.931 [2024-05-15 08:57:21.049689] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.931 [2024-05-15 08:57:21.049700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.931 [2024-05-15 08:57:21.049733] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.931 [2024-05-15 08:57:21.049814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.931 [2024-05-15 08:57:21.049827] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.931 [2024-05-15 08:57:21.049831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.931 [2024-05-15 08:57:21.049836] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.049842] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:04.932 [2024-05-15 08:57:21.049852] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:04.932 [2024-05-15 08:57:21.049863] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.049871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.049878] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.049891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.932 [2024-05-15 08:57:21.049922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.049979] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.049992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.050000] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050007] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.050017] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:04.932 [2024-05-15 08:57:21.050028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.932 [2024-05-15 08:57:21.050037] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050042] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050046] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.050054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.932 [2024-05-15 08:57:21.050083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.050144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.050157] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.050165] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050172] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.050183] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.932 [2024-05-15 08:57:21.050201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050208] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050212] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.050221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.932 [2024-05-15 08:57:21.050250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.050306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.050321] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.050328] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050332] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.050338] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.932 [2024-05-15 08:57:21.050347] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.932 [2024-05-15 08:57:21.050359] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.932 [2024-05-15 08:57:21.050467] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:04.932 [2024-05-15 08:57:21.050478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.932 [2024-05-15 08:57:21.050491] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.050509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.932 [2024-05-15 08:57:21.050533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.050608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.050623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.050630] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050638] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.050647] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.932 [2024-05-15 08:57:21.050662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.050681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.932 [2024-05-15 08:57:21.050713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.050769] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.050781] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.050785] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.050796] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.932 [2024-05-15 08:57:21.050805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.932 [2024-05-15 08:57:21.050820] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:04.932 [2024-05-15 08:57:21.050845] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.932 [2024-05-15 08:57:21.050862] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.050868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.050877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.932 [2024-05-15 08:57:21.050902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.051006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.932 [2024-05-15 08:57:21.051021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.932 [2024-05-15 08:57:21.051029] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051036] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=4096, cccid=0 00:17:04.932 [2024-05-15 08:57:21.051045] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x828950) on tqpair(0x7e0280): expected_datao=0, payload_size=4096 00:17:04.932 [2024-05-15 08:57:21.051053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051066] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051071] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 
08:57:21.051081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.051088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.051092] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051096] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.051107] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:04.932 [2024-05-15 08:57:21.051116] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:04.932 [2024-05-15 08:57:21.051124] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:04.932 [2024-05-15 08:57:21.051129] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:04.932 [2024-05-15 08:57:21.051134] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:04.932 [2024-05-15 08:57:21.051140] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:04.932 [2024-05-15 08:57:21.051154] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.932 [2024-05-15 08:57:21.051175] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051186] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.051201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.932 [2024-05-15 08:57:21.051227] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.932 [2024-05-15 08:57:21.051302] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.932 [2024-05-15 08:57:21.051316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.932 [2024-05-15 08:57:21.051323] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828950) on tqpair=0x7e0280 00:17:04.932 [2024-05-15 08:57:21.051342] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051347] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051351] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e0280) 00:17:04.932 [2024-05-15 08:57:21.051359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.932 [2024-05-15 08:57:21.051366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7e0280) 
00:17:04.932 [2024-05-15 08:57:21.051381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.932 [2024-05-15 08:57:21.051388] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.932 [2024-05-15 08:57:21.051397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.051407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.933 [2024-05-15 08:57:21.051415] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051419] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051423] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.051429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.933 [2024-05-15 08:57:21.051435] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.051456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.051471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051479] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.051490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.933 [2024-05-15 08:57:21.051518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828950, cid 0, qid 0 00:17:04.933 [2024-05-15 08:57:21.051526] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ab0, cid 1, qid 0 00:17:04.933 [2024-05-15 08:57:21.051532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828c10, cid 2, qid 0 00:17:04.933 [2024-05-15 08:57:21.051537] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.933 [2024-05-15 08:57:21.051542] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.933 [2024-05-15 08:57:21.051658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.933 [2024-05-15 08:57:21.051677] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.933 [2024-05-15 08:57:21.051682] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051687] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.933 [2024-05-15 08:57:21.051694] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:04.933 [2024-05-15 08:57:21.051700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.933 [2024-05-15 
08:57:21.051720] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.051729] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.051738] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.051766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.933 [2024-05-15 08:57:21.051801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.933 [2024-05-15 08:57:21.051868] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.933 [2024-05-15 08:57:21.051878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.933 [2024-05-15 08:57:21.051882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051887] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.933 [2024-05-15 08:57:21.051953] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.051976] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.051990] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.051995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.052004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.933 [2024-05-15 08:57:21.052039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.933 [2024-05-15 08:57:21.052125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.933 [2024-05-15 08:57:21.052136] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.933 [2024-05-15 08:57:21.052142] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052148] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=4096, cccid=4 00:17:04.933 [2024-05-15 08:57:21.052157] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x828ed0) on tqpair(0x7e0280): expected_datao=0, payload_size=4096 00:17:04.933 [2024-05-15 08:57:21.052166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052178] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052184] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052194] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.933 [2024-05-15 08:57:21.052203] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.933 [2024-05-15 08:57:21.052210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052216] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.933 [2024-05-15 08:57:21.052241] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:04.933 [2024-05-15 08:57:21.052269] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.052283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.052294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052299] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.052308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.933 [2024-05-15 08:57:21.052338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.933 [2024-05-15 08:57:21.052443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.933 [2024-05-15 08:57:21.052453] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.933 [2024-05-15 08:57:21.052458] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052462] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=4096, cccid=4 00:17:04.933 [2024-05-15 08:57:21.052467] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x828ed0) on tqpair(0x7e0280): expected_datao=0, payload_size=4096 00:17:04.933 [2024-05-15 08:57:21.052472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052480] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052485] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052494] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.933 [2024-05-15 08:57:21.052501] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.933 [2024-05-15 08:57:21.052505] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.933 [2024-05-15 08:57:21.052534] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.052547] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.052581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052593] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.933 [2024-05-15 08:57:21.052602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.933 [2024-05-15 08:57:21.052630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.933 [2024-05-15 08:57:21.052711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.933 [2024-05-15 08:57:21.052724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.933 [2024-05-15 08:57:21.052732] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052739] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=4096, cccid=4 00:17:04.933 [2024-05-15 08:57:21.052747] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x828ed0) on tqpair(0x7e0280): expected_datao=0, payload_size=4096 00:17:04.933 [2024-05-15 08:57:21.052755] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052765] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052770] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.933 [2024-05-15 08:57:21.052786] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.933 [2024-05-15 08:57:21.052791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.933 [2024-05-15 08:57:21.052795] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.933 [2024-05-15 08:57:21.052808] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.052823] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.933 [2024-05-15 08:57:21.052841] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:04.934 [2024-05-15 08:57:21.052850] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.934 [2024-05-15 08:57:21.052859] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:04.934 [2024-05-15 08:57:21.052869] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.934 [2024-05-15 08:57:21.052877] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:04.934 [2024-05-15 08:57:21.052886] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:04.934 [2024-05-15 08:57:21.052915] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.052922] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.052931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.052939] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.052943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.052947] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.052954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.934 [2024-05-15 08:57:21.052987] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.934 [2024-05-15 08:57:21.052998] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x829030, cid 5, qid 0 00:17:04.934 [2024-05-15 08:57:21.053074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.053094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.053100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.053113] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.053120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.053127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x829030) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.053147] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.053166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.053198] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x829030, cid 5, qid 0 00:17:04.934 [2024-05-15 08:57:21.053263] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.053284] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.053290] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053294] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x829030) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.053309] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053317] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.053328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.053364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x829030, cid 5, qid 0 00:17:04.934 [2024-05-15 08:57:21.053427] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.053436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.053443] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053450] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x829030) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.053467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.053473] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.053481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.053508] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x829030, cid 5, qid 0 00:17:04.934 [2024-05-15 08:57:21.057576] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.057604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.057611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x829030) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.057638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.057654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.057662] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.057674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.057682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.057694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.057702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7e0280) 00:17:04.934 [2024-05-15 08:57:21.057714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.934 [2024-05-15 08:57:21.057746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x829030, cid 5, qid 0 00:17:04.934 [2024-05-15 08:57:21.057755] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828ed0, cid 4, qid 0 00:17:04.934 [2024-05-15 08:57:21.057761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x829190, cid 6, qid 0 00:17:04.934 [2024-05-15 08:57:21.057766] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8292f0, cid 7, qid 0 00:17:04.934 [2024-05-15 08:57:21.057910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.934 [2024-05-15 08:57:21.057922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.934 [2024-05-15 08:57:21.057926] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057930] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=8192, cccid=5 00:17:04.934 [2024-05-15 08:57:21.057938] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x829030) on tqpair(0x7e0280): expected_datao=0, payload_size=8192 00:17:04.934 [2024-05-15 08:57:21.057945] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057967] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057975] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.057985] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.934 [2024-05-15 08:57:21.057996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.934 [2024-05-15 08:57:21.058003] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058011] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=512, cccid=4 00:17:04.934 [2024-05-15 08:57:21.058019] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x828ed0) on tqpair(0x7e0280): expected_datao=0, payload_size=512 00:17:04.934 [2024-05-15 08:57:21.058024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058032] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058036] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.934 [2024-05-15 08:57:21.058049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.934 [2024-05-15 08:57:21.058053] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058057] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=512, cccid=6 00:17:04.934 [2024-05-15 08:57:21.058062] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x829190) on tqpair(0x7e0280): expected_datao=0, payload_size=512 00:17:04.934 [2024-05-15 08:57:21.058066] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058073] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058077] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058084] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.934 [2024-05-15 08:57:21.058093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.934 [2024-05-15 08:57:21.058099] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058103] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e0280): datao=0, datal=4096, cccid=7 00:17:04.934 [2024-05-15 08:57:21.058108] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x8292f0) on tqpair(0x7e0280): expected_datao=0, payload_size=4096 00:17:04.934 [2024-05-15 08:57:21.058113] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058120] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058124] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058131] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.058140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.058147] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058154] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x829030) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.058181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.058195] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.058202] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828ed0) on tqpair=0x7e0280 00:17:04.934 [2024-05-15 08:57:21.058223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.934 [2024-05-15 08:57:21.058230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.934 [2024-05-15 08:57:21.058234] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.934 [2024-05-15 08:57:21.058238] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x829190) on tqpair=0x7e0280 00:17:04.935 [2024-05-15 08:57:21.058253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.935 [2024-05-15 08:57:21.058262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.935 [2024-05-15 08:57:21.058266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.935 [2024-05-15 08:57:21.058271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8292f0) on tqpair=0x7e0280 00:17:04.935 ===================================================== 00:17:04.935 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.935 ===================================================== 00:17:04.935 Controller Capabilities/Features 00:17:04.935 ================================ 00:17:04.935 Vendor ID: 8086 00:17:04.935 Subsystem Vendor ID: 8086 00:17:04.935 Serial Number: SPDK00000000000001 00:17:04.935 Model Number: SPDK bdev Controller 00:17:04.935 Firmware Version: 24.05 00:17:04.935 Recommended Arb Burst: 6 00:17:04.935 IEEE OUI Identifier: e4 d2 5c 00:17:04.935 Multi-path I/O 00:17:04.935 May have multiple subsystem ports: Yes 00:17:04.935 May have multiple controllers: Yes 00:17:04.935 Associated with SR-IOV VF: No 00:17:04.935 Max Data Transfer Size: 131072 00:17:04.935 Max Number of Namespaces: 32 00:17:04.935 Max Number of I/O Queues: 127 00:17:04.935 NVMe Specification Version (VS): 1.3 00:17:04.935 NVMe Specification Version (Identify): 1.3 00:17:04.935 Maximum Queue Entries: 128 00:17:04.935 Contiguous Queues Required: Yes 00:17:04.935 Arbitration Mechanisms Supported 00:17:04.935 Weighted Round Robin: Not Supported 00:17:04.935 Vendor Specific: Not Supported 00:17:04.935 Reset Timeout: 15000 ms 00:17:04.935 Doorbell Stride: 4 bytes 00:17:04.935 
NVM Subsystem Reset: Not Supported 00:17:04.935 Command Sets Supported 00:17:04.935 NVM Command Set: Supported 00:17:04.935 Boot Partition: Not Supported 00:17:04.935 Memory Page Size Minimum: 4096 bytes 00:17:04.935 Memory Page Size Maximum: 4096 bytes 00:17:04.935 Persistent Memory Region: Not Supported 00:17:04.935 Optional Asynchronous Events Supported 00:17:04.935 Namespace Attribute Notices: Supported 00:17:04.935 Firmware Activation Notices: Not Supported 00:17:04.935 ANA Change Notices: Not Supported 00:17:04.935 PLE Aggregate Log Change Notices: Not Supported 00:17:04.935 LBA Status Info Alert Notices: Not Supported 00:17:04.935 EGE Aggregate Log Change Notices: Not Supported 00:17:04.935 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.935 Zone Descriptor Change Notices: Not Supported 00:17:04.935 Discovery Log Change Notices: Not Supported 00:17:04.935 Controller Attributes 00:17:04.935 128-bit Host Identifier: Supported 00:17:04.935 Non-Operational Permissive Mode: Not Supported 00:17:04.935 NVM Sets: Not Supported 00:17:04.935 Read Recovery Levels: Not Supported 00:17:04.935 Endurance Groups: Not Supported 00:17:04.935 Predictable Latency Mode: Not Supported 00:17:04.935 Traffic Based Keep ALive: Not Supported 00:17:04.935 Namespace Granularity: Not Supported 00:17:04.935 SQ Associations: Not Supported 00:17:04.935 UUID List: Not Supported 00:17:04.935 Multi-Domain Subsystem: Not Supported 00:17:04.935 Fixed Capacity Management: Not Supported 00:17:04.935 Variable Capacity Management: Not Supported 00:17:04.935 Delete Endurance Group: Not Supported 00:17:04.935 Delete NVM Set: Not Supported 00:17:04.935 Extended LBA Formats Supported: Not Supported 00:17:04.935 Flexible Data Placement Supported: Not Supported 00:17:04.935 00:17:04.935 Controller Memory Buffer Support 00:17:04.935 ================================ 00:17:04.935 Supported: No 00:17:04.935 00:17:04.935 Persistent Memory Region Support 00:17:04.935 ================================ 00:17:04.935 Supported: No 00:17:04.935 00:17:04.935 Admin Command Set Attributes 00:17:04.935 ============================ 00:17:04.935 Security Send/Receive: Not Supported 00:17:04.935 Format NVM: Not Supported 00:17:04.935 Firmware Activate/Download: Not Supported 00:17:04.935 Namespace Management: Not Supported 00:17:04.935 Device Self-Test: Not Supported 00:17:04.935 Directives: Not Supported 00:17:04.935 NVMe-MI: Not Supported 00:17:04.935 Virtualization Management: Not Supported 00:17:04.935 Doorbell Buffer Config: Not Supported 00:17:04.935 Get LBA Status Capability: Not Supported 00:17:04.935 Command & Feature Lockdown Capability: Not Supported 00:17:04.935 Abort Command Limit: 4 00:17:04.935 Async Event Request Limit: 4 00:17:04.935 Number of Firmware Slots: N/A 00:17:04.935 Firmware Slot 1 Read-Only: N/A 00:17:04.935 Firmware Activation Without Reset: N/A 00:17:04.935 Multiple Update Detection Support: N/A 00:17:04.935 Firmware Update Granularity: No Information Provided 00:17:04.935 Per-Namespace SMART Log: No 00:17:04.935 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.935 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:04.935 Command Effects Log Page: Supported 00:17:04.935 Get Log Page Extended Data: Supported 00:17:04.935 Telemetry Log Pages: Not Supported 00:17:04.935 Persistent Event Log Pages: Not Supported 00:17:04.935 Supported Log Pages Log Page: May Support 00:17:04.935 Commands Supported & Effects Log Page: Not Supported 00:17:04.935 Feature Identifiers & Effects Log Page:May Support 
00:17:04.935 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.935 Data Area 4 for Telemetry Log: Not Supported 00:17:04.935 Error Log Page Entries Supported: 128 00:17:04.935 Keep Alive: Supported 00:17:04.935 Keep Alive Granularity: 10000 ms 00:17:04.935 00:17:04.935 NVM Command Set Attributes 00:17:04.935 ========================== 00:17:04.935 Submission Queue Entry Size 00:17:04.935 Max: 64 00:17:04.935 Min: 64 00:17:04.935 Completion Queue Entry Size 00:17:04.935 Max: 16 00:17:04.935 Min: 16 00:17:04.935 Number of Namespaces: 32 00:17:04.935 Compare Command: Supported 00:17:04.935 Write Uncorrectable Command: Not Supported 00:17:04.935 Dataset Management Command: Supported 00:17:04.935 Write Zeroes Command: Supported 00:17:04.935 Set Features Save Field: Not Supported 00:17:04.935 Reservations: Supported 00:17:04.935 Timestamp: Not Supported 00:17:04.935 Copy: Supported 00:17:04.935 Volatile Write Cache: Present 00:17:04.935 Atomic Write Unit (Normal): 1 00:17:04.935 Atomic Write Unit (PFail): 1 00:17:04.935 Atomic Compare & Write Unit: 1 00:17:04.935 Fused Compare & Write: Supported 00:17:04.935 Scatter-Gather List 00:17:04.935 SGL Command Set: Supported 00:17:04.935 SGL Keyed: Supported 00:17:04.935 SGL Bit Bucket Descriptor: Not Supported 00:17:04.935 SGL Metadata Pointer: Not Supported 00:17:04.935 Oversized SGL: Not Supported 00:17:04.935 SGL Metadata Address: Not Supported 00:17:04.935 SGL Offset: Supported 00:17:04.935 Transport SGL Data Block: Not Supported 00:17:04.935 Replay Protected Memory Block: Not Supported 00:17:04.935 00:17:04.935 Firmware Slot Information 00:17:04.935 ========================= 00:17:04.935 Active slot: 1 00:17:04.935 Slot 1 Firmware Revision: 24.05 00:17:04.935 00:17:04.935 00:17:04.935 Commands Supported and Effects 00:17:04.935 ============================== 00:17:04.935 Admin Commands 00:17:04.935 -------------- 00:17:04.935 Get Log Page (02h): Supported 00:17:04.935 Identify (06h): Supported 00:17:04.935 Abort (08h): Supported 00:17:04.935 Set Features (09h): Supported 00:17:04.935 Get Features (0Ah): Supported 00:17:04.935 Asynchronous Event Request (0Ch): Supported 00:17:04.935 Keep Alive (18h): Supported 00:17:04.935 I/O Commands 00:17:04.935 ------------ 00:17:04.935 Flush (00h): Supported LBA-Change 00:17:04.935 Write (01h): Supported LBA-Change 00:17:04.935 Read (02h): Supported 00:17:04.935 Compare (05h): Supported 00:17:04.935 Write Zeroes (08h): Supported LBA-Change 00:17:04.935 Dataset Management (09h): Supported LBA-Change 00:17:04.935 Copy (19h): Supported LBA-Change 00:17:04.935 Unknown (79h): Supported LBA-Change 00:17:04.935 Unknown (7Ah): Supported 00:17:04.935 00:17:04.935 Error Log 00:17:04.935 ========= 00:17:04.935 00:17:04.935 Arbitration 00:17:04.935 =========== 00:17:04.935 Arbitration Burst: 1 00:17:04.935 00:17:04.936 Power Management 00:17:04.936 ================ 00:17:04.936 Number of Power States: 1 00:17:04.936 Current Power State: Power State #0 00:17:04.936 Power State #0: 00:17:04.936 Max Power: 0.00 W 00:17:04.936 Non-Operational State: Operational 00:17:04.936 Entry Latency: Not Reported 00:17:04.936 Exit Latency: Not Reported 00:17:04.936 Relative Read Throughput: 0 00:17:04.936 Relative Read Latency: 0 00:17:04.936 Relative Write Throughput: 0 00:17:04.936 Relative Write Latency: 0 00:17:04.936 Idle Power: Not Reported 00:17:04.936 Active Power: Not Reported 00:17:04.936 Non-Operational Permissive Mode: Not Supported 00:17:04.936 00:17:04.936 Health Information 00:17:04.936 ================== 
00:17:04.936 Critical Warnings: 00:17:04.936 Available Spare Space: OK 00:17:04.936 Temperature: OK 00:17:04.936 Device Reliability: OK 00:17:04.936 Read Only: No 00:17:04.936 Volatile Memory Backup: OK 00:17:04.936 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:04.936 Temperature Threshold: [2024-05-15 08:57:21.058412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058422] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.058434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.058474] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8292f0, cid 7, qid 0 00:17:04.936 [2024-05-15 08:57:21.058545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.936 [2024-05-15 08:57:21.058554] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.936 [2024-05-15 08:57:21.058558] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8292f0) on tqpair=0x7e0280 00:17:04.936 [2024-05-15 08:57:21.058643] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:04.936 [2024-05-15 08:57:21.058671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.936 [2024-05-15 08:57:21.058680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.936 [2024-05-15 08:57:21.058687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.936 [2024-05-15 08:57:21.058694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.936 [2024-05-15 08:57:21.058707] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.058734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.058768] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.936 [2024-05-15 08:57:21.058829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.936 [2024-05-15 08:57:21.058842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.936 [2024-05-15 08:57:21.058849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058857] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.936 [2024-05-15 08:57:21.058868] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.058877] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.058885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.058917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.936 [2024-05-15 08:57:21.058992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.936 [2024-05-15 08:57:21.059002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.936 [2024-05-15 08:57:21.059006] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059012] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.936 [2024-05-15 08:57:21.059021] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:04.936 [2024-05-15 08:57:21.059028] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:04.936 [2024-05-15 08:57:21.059044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059061] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.059072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.059098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.936 [2024-05-15 08:57:21.059152] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.936 [2024-05-15 08:57:21.059162] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.936 [2024-05-15 08:57:21.059166] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059170] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.936 [2024-05-15 08:57:21.059184] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059193] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059199] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.059212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.059244] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.936 [2024-05-15 08:57:21.059304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.936 [2024-05-15 08:57:21.059318] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.936 [2024-05-15 08:57:21.059325] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059333] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.936 [2024-05-15 08:57:21.059350] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059356] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059360] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.059371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.059405] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.936 [2024-05-15 08:57:21.059453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.936 [2024-05-15 08:57:21.059462] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.936 [2024-05-15 08:57:21.059467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.936 [2024-05-15 08:57:21.059486] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059494] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.936 [2024-05-15 08:57:21.059498] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.936 [2024-05-15 08:57:21.059506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.936 [2024-05-15 08:57:21.059537] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.936 [2024-05-15 08:57:21.059605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.059622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.059627] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059632] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.059649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059662] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.059675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.059711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.059768] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.059777] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.059783] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.059805] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059813] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059820] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 
[2024-05-15 08:57:21.059832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.059866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.059925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.059934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.059938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.059962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059972] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.059979] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.059992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060021] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.060077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.060090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.060097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.060135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060145] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060152] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.060161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.060257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.060277] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.060284] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060291] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.060311] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060329] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.060341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060367] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.060426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.060440] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.060448] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060455] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.060474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.060496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.060596] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.060612] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.060619] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.060640] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060646] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060650] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.060660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.060751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.060761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.060765] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.060785] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060794] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060801] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.060814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.060902] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 
[2024-05-15 08:57:21.060911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.060917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.060943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060951] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.060956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.060964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.060991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.061045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.061058] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.061063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.061080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061087] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061094] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.061104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.061132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.061190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.061200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.061204] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061208] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.061226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.061251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.061278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.061335] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.061348] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.937 [2024-05-15 08:57:21.061352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:04.937 [2024-05-15 08:57:21.061357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.937 [2024-05-15 08:57:21.061370] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061378] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.937 [2024-05-15 08:57:21.061385] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.937 [2024-05-15 08:57:21.061397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.937 [2024-05-15 08:57:21.061426] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.937 [2024-05-15 08:57:21.061481] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.937 [2024-05-15 08:57:21.061496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.938 [2024-05-15 08:57:21.061504] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.938 [2024-05-15 08:57:21.061512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.938 [2024-05-15 08:57:21.061529] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.938 [2024-05-15 08:57:21.061537] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.938 [2024-05-15 08:57:21.061541] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e0280) 00:17:04.938 [2024-05-15 08:57:21.061551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.938 [2024-05-15 08:57:21.065603] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x828d70, cid 3, qid 0 00:17:04.938 [2024-05-15 08:57:21.065700] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.938 [2024-05-15 08:57:21.065715] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.938 [2024-05-15 08:57:21.065721] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.938 [2024-05-15 08:57:21.065725] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x828d70) on tqpair=0x7e0280 00:17:04.938 [2024-05-15 08:57:21.065736] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:04.938 0 Kelvin (-273 Celsius) 00:17:04.938 Available Spare: 0% 00:17:04.938 Available Spare Threshold: 0% 00:17:04.938 Life Percentage Used: 0% 00:17:04.938 Data Units Read: 0 00:17:04.938 Data Units Written: 0 00:17:04.938 Host Read Commands: 0 00:17:04.938 Host Write Commands: 0 00:17:04.938 Controller Busy Time: 0 minutes 00:17:04.938 Power Cycles: 0 00:17:04.938 Power On Hours: 0 hours 00:17:04.938 Unsafe Shutdowns: 0 00:17:04.938 Unrecoverable Media Errors: 0 00:17:04.938 Lifetime Error Log Entries: 0 00:17:04.938 Warning Temperature Time: 0 minutes 00:17:04.938 Critical Temperature Time: 0 minutes 00:17:04.938 00:17:04.938 Number of Queues 00:17:04.938 ================ 00:17:04.938 Number of I/O Submission Queues: 127 00:17:04.938 Number of I/O Completion Queues: 127 00:17:04.938 00:17:04.938 Active Namespaces 00:17:04.938 ================= 00:17:04.938 Namespace ID:1 00:17:04.938 Error Recovery Timeout: Unlimited 00:17:04.938 Command Set Identifier: NVM (00h) 00:17:04.938 Deallocate: 
Supported 00:17:04.938 Deallocated/Unwritten Error: Not Supported 00:17:04.938 Deallocated Read Value: Unknown 00:17:04.938 Deallocate in Write Zeroes: Not Supported 00:17:04.938 Deallocated Guard Field: 0xFFFF 00:17:04.938 Flush: Supported 00:17:04.938 Reservation: Supported 00:17:04.938 Namespace Sharing Capabilities: Multiple Controllers 00:17:04.938 Size (in LBAs): 131072 (0GiB) 00:17:04.938 Capacity (in LBAs): 131072 (0GiB) 00:17:04.938 Utilization (in LBAs): 131072 (0GiB) 00:17:04.938 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:04.938 EUI64: ABCDEF0123456789 00:17:04.938 UUID: bab776e7-a2a0-4c29-aac9-8bfffab981be 00:17:04.938 Thin Provisioning: Not Supported 00:17:04.938 Per-NS Atomic Units: Yes 00:17:04.938 Atomic Boundary Size (Normal): 0 00:17:04.938 Atomic Boundary Size (PFail): 0 00:17:04.938 Atomic Boundary Offset: 0 00:17:04.938 Maximum Single Source Range Length: 65535 00:17:04.938 Maximum Copy Length: 65535 00:17:04.938 Maximum Source Range Count: 1 00:17:04.938 NGUID/EUI64 Never Reused: No 00:17:04.938 Namespace Write Protected: No 00:17:04.938 Number of LBA Formats: 1 00:17:04.938 Current LBA Format: LBA Format #00 00:17:04.938 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:04.938 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.938 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.938 rmmod nvme_tcp 00:17:04.938 rmmod nvme_fabrics 00:17:04.938 rmmod nvme_keyring 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 80551 ']' 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 80551 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 80551 ']' 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 80551 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80551 00:17:05.197 killing process with pid 80551 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80551' 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 80551 00:17:05.197 [2024-05-15 08:57:21.202015] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 80551 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.197 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.198 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.198 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.198 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.457 08:57:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:05.457 00:17:05.457 real 0m2.562s 00:17:05.457 user 0m7.159s 00:17:05.457 sys 0m0.604s 00:17:05.457 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:05.457 08:57:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:05.457 ************************************ 00:17:05.457 END TEST nvmf_identify 00:17:05.457 ************************************ 00:17:05.457 08:57:21 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:05.457 08:57:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:05.457 08:57:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:05.457 08:57:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.457 ************************************ 00:17:05.457 START TEST nvmf_perf 00:17:05.457 ************************************ 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:05.457 * Looking for test storage... 
00:17:05.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.457 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:05.458 Cannot find device "nvmf_tgt_br" 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.458 Cannot find device "nvmf_tgt_br2" 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:05.458 Cannot find device "nvmf_tgt_br" 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:05.458 Cannot find device "nvmf_tgt_br2" 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:05.458 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:05.716 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.717 
08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:05.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:05.717 00:17:05.717 --- 10.0.0.2 ping statistics --- 00:17:05.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.717 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:05.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:05.717 00:17:05.717 --- 10.0.0.3 ping statistics --- 00:17:05.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.717 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:17:05.717 00:17:05.717 --- 10.0.0.1 ping statistics --- 00:17:05.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.717 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=80782 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 80782 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 80782 ']' 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.717 08:57:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:05.975 [2024-05-15 08:57:21.969660] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:05.975 [2024-05-15 08:57:21.969793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.975 [2024-05-15 08:57:22.110160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.975 [2024-05-15 08:57:22.170344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.975 [2024-05-15 08:57:22.170400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
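The nvmf_veth_init trace above assembles the test network by hand: two veth pairs joined by a bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 inside the namespace. A condensed sketch of that topology, kept to a single target interface and assuming root plus iproute2 (the interface and namespace names are simply the ones from the trace, not required values), is:

# Condensed sketch of the nvmf_veth_init steps traced above (one target interface only).
ip netns add nvmf_tgt_ns_spdk                               # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joining the host-side veth peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the initiator port
ping -c 1 10.0.0.2                                          # same connectivity check as in the log

With the topology in place, the target application is launched inside the namespace exactly as in the nvmfappstart trace just above (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF); its startup output follows.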
00:17:05.975 [2024-05-15 08:57:22.170412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.975 [2024-05-15 08:57:22.170420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.975 [2024-05-15 08:57:22.170427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.975 [2024-05-15 08:57:22.170526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.975 [2024-05-15 08:57:22.170964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.975 [2024-05-15 08:57:22.171126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.975 [2024-05-15 08:57:22.171118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.924 08:57:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.924 08:57:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:17:06.924 08:57:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.924 08:57:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.924 08:57:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 08:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.924 08:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:06.924 08:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:07.181 08:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:07.181 08:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:07.768 08:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:07.768 08:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.025 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:08.025 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:08.026 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:08.026 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:08.026 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.283 [2024-05-15 08:57:24.295605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.283 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:08.540 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:08.540 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.798 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:08.798 08:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:09.055 08:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.313 [2024-05-15 08:57:25.484925] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:09.313 [2024-05-15 08:57:25.485766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.313 08:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:09.877 08:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:09.877 08:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:09.877 08:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:09.877 08:57:25 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:10.809 Initializing NVMe Controllers 00:17:10.809 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:10.809 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:10.809 Initialization complete. Launching workers. 00:17:10.809 ======================================================== 00:17:10.809 Latency(us) 00:17:10.810 Device Information : IOPS MiB/s Average min max 00:17:10.810 PCIE (0000:00:10.0) NSID 1 from core 0: 25532.63 99.74 1253.02 305.64 7907.70 00:17:10.810 ======================================================== 00:17:10.810 Total : 25532.63 99.74 1253.02 305.64 7907.70 00:17:10.810 00:17:10.810 08:57:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:12.192 Initializing NVMe Controllers 00:17:12.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:12.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:12.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:12.192 Initialization complete. Launching workers. 00:17:12.192 ======================================================== 00:17:12.192 Latency(us) 00:17:12.192 Device Information : IOPS MiB/s Average min max 00:17:12.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2697.33 10.54 368.84 119.14 4364.95 00:17:12.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.52 0.48 8161.82 7868.74 12030.47 00:17:12.192 ======================================================== 00:17:12.192 Total : 2819.85 11.02 707.43 119.14 12030.47 00:17:12.192 00:17:12.192 08:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:13.567 Initializing NVMe Controllers 00:17:13.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:13.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:13.567 Initialization complete. Launching workers. 
00:17:13.567 ======================================================== 00:17:13.567 Latency(us) 00:17:13.567 Device Information : IOPS MiB/s Average min max 00:17:13.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7231.83 28.25 4429.23 519.18 8875.51 00:17:13.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2672.57 10.44 12087.10 6416.39 23517.12 00:17:13.567 ======================================================== 00:17:13.567 Total : 9904.40 38.69 6495.60 519.18 23517.12 00:17:13.567 00:17:13.567 08:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:13.567 08:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:16.094 Initializing NVMe Controllers 00:17:16.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.094 Controller IO queue size 128, less than required. 00:17:16.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.094 Controller IO queue size 128, less than required. 00:17:16.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:16.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:16.094 Initialization complete. Launching workers. 00:17:16.094 ======================================================== 00:17:16.094 Latency(us) 00:17:16.094 Device Information : IOPS MiB/s Average min max 00:17:16.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1143.99 286.00 113172.41 65320.88 239230.92 00:17:16.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 547.00 136.75 266071.81 122290.77 537183.82 00:17:16.094 ======================================================== 00:17:16.094 Total : 1690.99 422.75 162631.89 65320.88 537183.82 00:17:16.094 00:17:16.352 08:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:16.352 Initializing NVMe Controllers 00:17:16.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.352 Controller IO queue size 128, less than required. 00:17:16.352 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.352 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:16.352 Controller IO queue size 128, less than required. 00:17:16.352 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.352 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:16.352 WARNING: Some requested NVMe devices were skipped 00:17:16.352 No valid NVMe controllers or AIO or URING devices found 00:17:16.352 08:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:18.882 Initializing NVMe Controllers 00:17:18.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.882 Controller IO queue size 128, less than required. 00:17:18.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:18.882 Controller IO queue size 128, less than required. 00:17:18.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:18.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:18.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:18.882 Initialization complete. Launching workers. 00:17:18.882 00:17:18.882 ==================== 00:17:18.882 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:18.882 TCP transport: 00:17:18.882 polls: 7184 00:17:18.882 idle_polls: 4188 00:17:18.882 sock_completions: 2996 00:17:18.882 nvme_completions: 6059 00:17:18.882 submitted_requests: 9088 00:17:18.882 queued_requests: 1 00:17:18.883 00:17:18.883 ==================== 00:17:18.883 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:18.883 TCP transport: 00:17:18.883 polls: 7329 00:17:18.883 idle_polls: 4413 00:17:18.883 sock_completions: 2916 00:17:18.883 nvme_completions: 5753 00:17:18.883 submitted_requests: 8586 00:17:18.883 queued_requests: 1 00:17:18.883 ======================================================== 00:17:18.883 Latency(us) 00:17:18.883 Device Information : IOPS MiB/s Average min max 00:17:18.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1514.38 378.59 86294.83 57902.42 128853.44 00:17:18.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1437.89 359.47 89890.76 34944.00 146565.85 00:17:18.883 ======================================================== 00:17:18.883 Total : 2952.27 738.07 88046.21 34944.00 146565.85 00:17:18.883 00:17:18.883 08:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:18.883 08:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.141 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.141 rmmod nvme_tcp 00:17:19.399 rmmod nvme_fabrics 00:17:19.399 rmmod nvme_keyring 00:17:19.399 08:57:35 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 80782 ']' 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 80782 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 80782 ']' 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 80782 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80782 00:17:19.400 killing process with pid 80782 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80782' 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 80782 00:17:19.400 [2024-05-15 08:57:35.440546] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:19.400 08:57:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 80782 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.335 08:57:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.336 08:57:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:20.336 00:17:20.336 real 0m14.800s 00:17:20.336 user 0m55.199s 00:17:20.336 sys 0m3.533s 00:17:20.336 08:57:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:20.336 08:57:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:20.336 ************************************ 00:17:20.336 END TEST nvmf_perf 00:17:20.336 ************************************ 00:17:20.336 08:57:36 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:20.336 08:57:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:20.336 08:57:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:20.336 08:57:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.336 ************************************ 00:17:20.336 START TEST nvmf_fio_host 00:17:20.336 ************************************ 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:20.336 * Looking for test storage... 00:17:20.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.336 08:57:36 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:20.336 08:57:36 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:20.336 Cannot find device "nvmf_tgt_br" 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.336 Cannot find device "nvmf_tgt_br2" 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:20.336 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:20.336 Cannot find device "nvmf_tgt_br" 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:20.337 Cannot find device "nvmf_tgt_br2" 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:20.337 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:20.595 00:17:20.595 --- 10.0.0.2 ping statistics --- 00:17:20.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.595 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:20.595 00:17:20.595 --- 10.0.0.3 ping statistics --- 00:17:20.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.595 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:20.595 00:17:20.595 --- 10.0.0.1 ping statistics --- 00:17:20.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.595 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=81264 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 81264 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 81264 ']' 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:20.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:20.595 08:57:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.853 [2024-05-15 08:57:36.843141] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:20.853 [2024-05-15 08:57:36.843257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.853 [2024-05-15 08:57:36.978728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.853 [2024-05-15 08:57:37.038952] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.853 [2024-05-15 08:57:37.039006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:20.853 [2024-05-15 08:57:37.039018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.853 [2024-05-15 08:57:37.039027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.853 [2024-05-15 08:57:37.039034] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.853 [2024-05-15 08:57:37.039159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.853 [2024-05-15 08:57:37.039312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.853 [2024-05-15 08:57:37.039760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.853 [2024-05-15 08:57:37.039774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 [2024-05-15 08:57:37.147302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 Malloc1 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 [2024-05-15 08:57:37.245620] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:21.129 [2024-05-15 08:57:37.245924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:21.129 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:21.130 08:57:37 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:21.408 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:21.408 fio-3.35 00:17:21.408 Starting 1 thread 00:17:23.939 00:17:23.939 test: (groupid=0, jobs=1): err= 0: pid=81329: Wed May 15 08:57:39 2024 00:17:23.939 read: IOPS=8366, BW=32.7MiB/s (34.3MB/s)(65.6MiB/2006msec) 00:17:23.939 slat (usec): min=2, max=318, avg= 2.98, stdev= 3.39 00:17:23.939 clat (usec): min=3196, max=56615, avg=8061.85, stdev=3353.18 00:17:23.939 lat (usec): min=3233, max=56618, avg=8064.83, stdev=3353.54 00:17:23.939 clat percentiles (usec): 00:17:23.939 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:17:23.939 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7635], 00:17:23.939 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 9372], 00:17:23.939 | 99.00th=[20317], 99.50th=[38536], 99.90th=[49546], 99.95th=[52167], 00:17:23.939 | 99.99th=[56361] 00:17:23.939 bw ( KiB/s): min=27768, max=35744, per=99.84%, avg=33412.00, stdev=3776.38, samples=4 00:17:23.939 iops : min= 6942, max= 8936, avg=8353.00, stdev=944.09, samples=4 00:17:23.939 write: IOPS=8362, BW=32.7MiB/s (34.3MB/s)(65.5MiB/2006msec); 0 zone resets 00:17:23.939 slat (usec): min=2, max=258, avg= 3.09, stdev= 2.58 00:17:23.939 clat (usec): min=2408, max=51875, avg=7186.45, stdev=2817.44 00:17:23.939 lat (usec): min=2422, max=51878, avg=7189.54, stdev=2817.63 00:17:23.939 clat percentiles (usec): 00:17:23.939 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:17:23.939 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:17:23.939 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 8094], 00:17:23.939 | 99.00th=[14222], 99.50th=[34341], 99.90th=[44303], 99.95th=[45351], 00:17:23.939 | 99.99th=[51643] 00:17:23.939 bw ( KiB/s): min=27392, max=35768, per=100.00%, avg=33452.00, stdev=4048.58, samples=4 00:17:23.939 iops : min= 6848, max= 8942, avg=8363.00, stdev=1012.15, samples=4 00:17:23.939 lat (msec) : 4=0.09%, 10=96.19%, 20=2.81%, 50=0.86%, 100=0.04% 00:17:23.939 cpu : usr=66.13%, sys=23.99%, ctx=8, majf=0, minf=5 00:17:23.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:23.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.940 issued rwts: total=16783,16776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.940 00:17:23.940 Run status group 0 (all jobs): 00:17:23.940 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.6MiB (68.7MB), run=2006-2006msec 00:17:23.940 WRITE: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.5MiB (68.7MB), run=2006-2006msec 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
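The fio jobs traced in this test go through the SPDK fio plugin rather than the kernel NVMe/TCP initiator: the plugin shared object is LD_PRELOADed into fio and the remote namespace is selected with a key/value --filename string instead of a block device path. A minimal sketch of the same invocation, reusing the paths and target address that appear in the trace above (the fio binary and repo locations are simply whatever this test VM provides):

  # Load the SPDK NVMe ioengine into fio, then address the TCP target
  # by transport/address/port/namespace rather than by a device node.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096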
00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:23.940 08:57:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:23.940 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:23.940 fio-3.35 00:17:23.940 Starting 1 thread 00:17:26.475 00:17:26.475 test: (groupid=0, jobs=1): err= 0: pid=81372: Wed May 15 08:57:42 2024 00:17:26.475 read: IOPS=7894, BW=123MiB/s (129MB/s)(248MiB/2008msec) 00:17:26.475 slat (usec): min=3, max=122, avg= 3.95, stdev= 1.80 00:17:26.475 clat (usec): min=2844, max=17924, avg=9508.98, stdev=2183.16 00:17:26.475 lat (usec): min=2848, max=17927, avg=9512.93, stdev=2183.18 00:17:26.475 clat percentiles (usec): 00:17:26.475 | 1.00th=[ 5211], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7504], 00:17:26.475 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10159], 00:17:26.475 | 70.00th=[10814], 80.00th=[11469], 90.00th=[11994], 95.00th=[12780], 00:17:26.475 | 99.00th=[15270], 99.50th=[16057], 99.90th=[17695], 99.95th=[17695], 00:17:26.475 | 99.99th=[17957] 00:17:26.475 bw ( KiB/s): min=52768, max=74144, per=51.71%, avg=65312.00, stdev=9027.29, samples=4 00:17:26.475 iops : min= 3298, max= 4634, avg=4082.00, stdev=564.21, samples=4 00:17:26.475 write: IOPS=4838, BW=75.6MiB/s 
(79.3MB/s)(134MiB/1772msec); 0 zone resets 00:17:26.475 slat (usec): min=37, max=914, avg=39.72, stdev=11.37 00:17:26.475 clat (usec): min=4860, max=19957, avg=11527.05, stdev=2009.64 00:17:26.475 lat (usec): min=4898, max=19996, avg=11566.78, stdev=2009.56 00:17:26.475 clat percentiles (usec): 00:17:26.475 | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9765], 00:17:26.475 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:17:26.475 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14353], 95.00th=[15270], 00:17:26.475 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:17:26.475 | 99.99th=[20055] 00:17:26.475 bw ( KiB/s): min=54080, max=77824, per=88.01%, avg=68128.00, stdev=10070.94, samples=4 00:17:26.475 iops : min= 3380, max= 4864, avg=4258.00, stdev=629.43, samples=4 00:17:26.475 lat (msec) : 4=0.09%, 10=45.54%, 20=54.37% 00:17:26.475 cpu : usr=74.64%, sys=16.39%, ctx=6, majf=0, minf=20 00:17:26.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:26.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:26.475 issued rwts: total=15852,8573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:26.475 00:17:26.475 Run status group 0 (all jobs): 00:17:26.475 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=248MiB (260MB), run=2008-2008msec 00:17:26.475 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=134MiB (140MB), run=1772-1772msec 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.475 rmmod nvme_tcp 00:17:26.475 rmmod nvme_fabrics 00:17:26.475 rmmod nvme_keyring 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 81264 ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 81264 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@946 -- # '[' -z 81264 ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 81264 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81264 00:17:26.475 killing process with pid 81264 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81264' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 81264 00:17:26.475 [2024-05-15 08:57:42.349037] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 81264 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:26.475 00:17:26.475 real 0m6.265s 00:17:26.475 user 0m24.444s 00:17:26.475 sys 0m1.857s 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.475 08:57:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.475 ************************************ 00:17:26.475 END TEST nvmf_fio_host 00:17:26.475 ************************************ 00:17:26.476 08:57:42 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:26.476 08:57:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:26.476 08:57:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.476 08:57:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.476 ************************************ 00:17:26.476 START TEST nvmf_failover 00:17:26.476 ************************************ 00:17:26.476 08:57:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:26.734 * Looking for test storage... 
00:17:26.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.734 
08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.734 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:26.735 Cannot find device "nvmf_tgt_br" 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.735 Cannot find device "nvmf_tgt_br2" 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:26.735 Cannot find device "nvmf_tgt_br" 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:26.735 Cannot find device "nvmf_tgt_br2" 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:26.735 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:26.993 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:26.993 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:26.993 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:26.993 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:26.993 08:57:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:26.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:26.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:26.993 00:17:26.993 --- 10.0.0.2 ping statistics --- 00:17:26.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.993 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:26.993 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.993 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:26.993 00:17:26.993 --- 10.0.0.3 ping statistics --- 00:17:26.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.993 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:26.993 00:17:26.993 --- 10.0.0.1 ping statistics --- 00:17:26.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.993 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:26.993 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=81578 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 81578 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 81578 ']' 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:26.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
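For reference, the nvmf_veth_init sequence traced above can be reproduced by hand with the same commands the log shows. The sketch below is a condensed, best-effort summary (run as root on a clean host); interface names, addresses, and the nvmf_tgt path are taken directly from the trace, the second target interface (nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3) is built the same way and omitted here for brevity, and the real common.sh additionally tears down any stale devices first.

  #!/usr/bin/env bash
  # Sketch of the NVMe/TCP test topology shown in the trace (assumes a clean host).
  set -e
  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                # bridge the two host-side veth ends
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                             # connectivity check, as in the log
  modprobe nvme-tcp                                              # initiator-side transport module
  # Launch the SPDK target inside the namespace (binary path as shown in the trace).
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &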
00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:26.994 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:26.994 [2024-05-15 08:57:43.116752] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:26.994 [2024-05-15 08:57:43.116830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.252 [2024-05-15 08:57:43.246491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.252 [2024-05-15 08:57:43.304791] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.252 [2024-05-15 08:57:43.304842] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.252 [2024-05-15 08:57:43.304854] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.252 [2024-05-15 08:57:43.304862] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.252 [2024-05-15 08:57:43.304869] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.252 [2024-05-15 08:57:43.305892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.252 [2024-05-15 08:57:43.306069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.252 [2024-05-15 08:57:43.306074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.252 08:57:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:27.510 [2024-05-15 08:57:43.681711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.510 08:57:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:27.768 Malloc0 00:17:27.768 08:57:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.026 08:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.593 08:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.851 [2024-05-15 08:57:44.854935] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:28.851 [2024-05-15 
08:57:44.855924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.851 08:57:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:29.110 [2024-05-15 08:57:45.139363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:29.110 08:57:45 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:29.368 [2024-05-15 08:57:45.491721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81684 00:17:29.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81684 /var/tmp/bdevperf.sock 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 81684 ']' 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
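At this point the subsystem nqn.2016-06.io.spdk:cnode1 is reachable on 10.0.0.2 ports 4420-4422 and bdevperf is waiting on /var/tmp/bdevperf.sock. The failover exercise that the script drives next (and whose output fills the rest of this trace) boils down to the RPC sequence below; this is a condensed sketch assembled from the commands visible later in the log, not a substitute for failover.sh, and it assumes the target and bdevperf started as above.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_rpc="$rpc -s /var/tmp/bdevperf.sock"
  # Attach the bdevperf NVMe controller through two paths (ports 4420 and 4421).
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Start the 15-second verify workload in the background, then force failovers by
  # removing and re-adding listeners, in the same order the trace shows.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait   # let the background perform_tests run finish its 15-second workload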
00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:29.368 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:29.934 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:29.934 08:57:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:17:29.934 08:57:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:29.934 NVMe0n1 00:17:30.192 08:57:46 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:30.451 00:17:30.451 08:57:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81717 00:17:30.451 08:57:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:30.451 08:57:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:31.391 08:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.957 [2024-05-15 08:57:47.910904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.911669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.911840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.911981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.912967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the 
state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.913903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.914973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.915885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.916937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.917064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.917208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.917335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.917447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.917591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 [2024-05-15 08:57:47.917729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1310 is same with the state(5) to be set 00:17:31.957 08:57:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:35.251 08:57:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:35.251 00:17:35.251 08:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:35.521 [2024-05-15 08:57:51.624291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.521 [2024-05-15 08:57:51.624354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.521 [2024-05-15 08:57:51.624365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.521 [2024-05-15 08:57:51.624374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 
08:57:51.624401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to 
be set 00:17:35.522 [2024-05-15 08:57:51.624601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 [2024-05-15 08:57:51.624696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1eb0 is same with the state(5) to be set 00:17:35.522 08:57:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:38.814 08:57:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.814 [2024-05-15 08:57:54.989771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.814 08:57:55 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:40.188 08:57:56 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:40.188 [2024-05-15 08:57:56.326116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326232] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.188 [2024-05-15 08:57:56.326337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 
00:17:40.189 [2024-05-15 08:57:56.326411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 [2024-05-15 08:57:56.326493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09840 is same with the state(5) to be set 00:17:40.189 08:57:56 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 81717 00:17:45.448 0 00:17:45.448 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 81684 00:17:45.448 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 81684 ']' 00:17:45.448 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 81684 00:17:45.448 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:17:45.448 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81684 00:17:45.713 killing process with pid 81684 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81684' 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 81684 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 81684 00:17:45.713 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:45.713 [2024-05-15 08:57:45.566299] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:17:45.713 [2024-05-15 08:57:45.566415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81684 ] 00:17:45.713 [2024-05-15 08:57:45.701141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.713 [2024-05-15 08:57:45.775551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.713 Running I/O for 15 seconds... 00:17:45.713 [2024-05-15 08:57:47.918123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 
[2024-05-15 08:57:47.918443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.713 [2024-05-15 08:57:47.918458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.713 [2024-05-15 08:57:47.918472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.918971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.918985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.714 [2024-05-15 08:57:47.919692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.714 [2024-05-15 08:57:47.919722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 
[2024-05-15 08:57:47.919746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.714 [2024-05-15 08:57:47.919761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.714 [2024-05-15 08:57:47.919777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.714 [2024-05-15 08:57:47.919791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.919983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.919997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.715 [2024-05-15 08:57:47.920609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82968 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.920977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.920993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 
[2024-05-15 08:57:47.921006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.921022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.715 [2024-05-15 08:57:47.921036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.715 [2024-05-15 08:57:47.921052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.921971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.921987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.922001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.922030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.922059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.922096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.922126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.716 [2024-05-15 08:57:47.922156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0830 is same with the state(5) to be set 00:17:45.716 [2024-05-15 08:57:47.922189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:45.716 [2024-05-15 08:57:47.922199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:45.716 [2024-05-15 08:57:47.922210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83352 len:8 PRP1 0x0 PRP2 0x0 00:17:45.716 [2024-05-15 08:57:47.922224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922280] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11b0830 was disconnected and freed. reset controller. 
00:17:45.716 [2024-05-15 08:57:47.922299] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:45.716 [2024-05-15 08:57:47.922365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.716 [2024-05-15 08:57:47.922387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.716 [2024-05-15 08:57:47.922403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.716 [2024-05-15 08:57:47.922416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:47.922430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.717 [2024-05-15 08:57:47.922443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:47.922458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.717 [2024-05-15 08:57:47.922473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:47.922487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:45.717 [2024-05-15 08:57:47.922547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11415f0 (9): Bad file descriptor 00:17:45.717 [2024-05-15 08:57:47.926591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:45.717 [2024-05-15 08:57:47.968034] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
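00:17:45.717 # Editor's note (not part of the captured console output): the entries above show every queued READ/WRITE on qpair 0x11b0830 being completed with ABORTED - SQ DELETION while bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and resets nqn.2016-06.io.spdk:cnode1. A minimal sketch of how a two-listener target and a failover-capable initiator could be wired up with scripts/rpc.py is shown below; the NQN and 10.0.0.2 addresses are taken from the log, while the bdev names (Malloc0, Nvme0) and the exact flag set are illustrative assumptions, not commands recovered from this run.
00:17:45.717 #   # target side: one subsystem, two TCP listeners (assumed scripts/rpc.py invocations)
00:17:45.717 #   scripts/rpc.py nvmf_create_transport -t tcp
00:17:45.717 #   scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
00:17:45.717 #   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:45.717 #   scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:45.717 #   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:45.717 #   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:45.717 #   # initiator side: attach the same controller name once per path so bdev_nvme
00:17:45.717 #   # has an alternate trid (10.0.0.2:4421) to fail over to when 4420 goes down
00:17:45.717 #   scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:45.717 #   scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1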
00:17:45.717 [2024-05-15 08:57:51.624399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.717 [2024-05-15 08:57:51.624457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.624476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.717 [2024-05-15 08:57:51.624520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.624537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.717 [2024-05-15 08:57:51.624550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.624580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.717 [2024-05-15 08:57:51.624597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.624612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11415f0 is same with the state(5) to be set 00:17:45.717 [2024-05-15 08:57:51.625523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.717 [2024-05-15 08:57:51.625788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.625827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.625868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.625900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.625929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.625958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.625975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.625989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.717 [2024-05-15 08:57:51.626401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.717 [2024-05-15 08:57:51.626417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 
08:57:51.626674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.718 [2024-05-15 08:57:51.626746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.626972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.626986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.718 [2024-05-15 08:57:51.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85104 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:45.718 [2024-05-15 08:57:51.627598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 
08:57:51.627899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.627972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.719 [2024-05-15 08:57:51.627986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.719 [2024-05-15 08:57:51.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.719 [2024-05-15 08:57:51.628872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.628888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.628901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.628917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.628930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.628945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.628969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.628986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 
[2024-05-15 08:57:51.629163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:51.629448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:51.629477] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:17:45.720 [2024-05-15 08:57:51.629491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:17:45.720 [2024-05-15 08:57:51.629503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84688 len:8 PRP1 0x0 PRP2 0x0 
00:17:45.720 [2024-05-15 08:57:51.629516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:45.720 [2024-05-15 08:57:51.629576] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x135b300 was disconnected and freed. reset controller. 
00:17:45.720 [2024-05-15 08:57:51.629597] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:17:45.720 [2024-05-15 08:57:51.629612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:45.720 [2024-05-15 08:57:51.633625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:17:45.720 [2024-05-15 08:57:51.633669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11415f0 (9): Bad file descriptor 
00:17:45.720 [2024-05-15 08:57:51.666738] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:45.720 [2024-05-15 08:57:56.326668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.720 [2024-05-15 08:57:56.326744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:45.720 [2024-05-15 08:57:56.326787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.720 [2024-05-15 08:57:56.326815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:45.720 [2024-05-15 08:57:56.326844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.720 [2024-05-15 08:57:56.326870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:45.720 [2024-05-15 08:57:56.326898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.720 [2024-05-15 08:57:56.326925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:45.720 [2024-05-15 08:57:56.326951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.720 [2024-05-15 08:57:56.326978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:45.720 [2024-05-15 08:57:56.327040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.720 [2024-05-15 08:57:56.327068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 
08:57:56.327095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.720 [2024-05-15 08:57:56.327686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.720 [2024-05-15 08:57:56.327731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.327763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.327790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.327820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.327847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.327876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.327902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.327930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.327957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.327986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16112 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.328953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.721 [2024-05-15 08:57:56.328980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 
[2024-05-15 08:57:56.329366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.721 [2024-05-15 08:57:56.329850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.721 [2024-05-15 08:57:56.329865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.329879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.329894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.329907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.329923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.329936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.329952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.329966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.329981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.329995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.722 [2024-05-15 08:57:56.330778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.722 [2024-05-15 08:57:56.330818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.722 [2024-05-15 08:57:56.330848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.722 [2024-05-15 08:57:56.330877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.722 [2024-05-15 08:57:56.330906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.722 [2024-05-15 08:57:56.330945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.722 [2024-05-15 08:57:56.330961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.722 [2024-05-15 08:57:56.330974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.330990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16368 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.723 [2024-05-15 08:57:56.331967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.331982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b2620 is same with the state(5) to be set 00:17:45.723 
[2024-05-15 08:57:56.332001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:45.723 [2024-05-15 08:57:56.332012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:45.723 [2024-05-15 08:57:56.332023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:8 PRP1 0x0 PRP2 0x0 00:17:45.723 [2024-05-15 08:57:56.332036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.332087] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11b2620 was disconnected and freed. reset controller. 00:17:45.723 [2024-05-15 08:57:56.332122] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:45.723 [2024-05-15 08:57:56.332180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.723 [2024-05-15 08:57:56.332202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.332228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.723 [2024-05-15 08:57:56.332242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.332256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.723 [2024-05-15 08:57:56.332270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.332284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.723 [2024-05-15 08:57:56.332297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.723 [2024-05-15 08:57:56.332311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:45.723 [2024-05-15 08:57:56.332347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11415f0 (9): Bad file descriptor 00:17:45.723 [2024-05-15 08:57:56.336465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:45.723 [2024-05-15 08:57:56.371670] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:45.724 00:17:45.724 Latency(us) 00:17:45.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.724 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:45.724 Verification LBA range: start 0x0 length 0x4000 00:17:45.724 NVMe0n1 : 15.01 8477.62 33.12 203.45 0.00 14710.96 647.91 26095.24 00:17:45.724 =================================================================================================================== 00:17:45.724 Total : 8477.62 33.12 203.45 0.00 14710.96 647.91 26095.24 00:17:45.724 Received shutdown signal, test time was about 15.000000 seconds 00:17:45.724 00:17:45.724 Latency(us) 00:17:45.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.724 =================================================================================================================== 00:17:45.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:45.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81921 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81921 /var/tmp/bdevperf.sock 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 81921 ']' 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
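The check traced at host/failover.sh@65-67 above boils down to counting the 'Resetting controller successful' notices that bdev_nvme prints once per completed controller reset, and requiring exactly three of them for the three failovers the test exercises. A minimal sketch of that check, with $bdevperf_log standing in as a placeholder for whatever output the script actually pipes into grep:

    count=$(grep -c 'Resetting controller successful' "$bdevperf_log")  # one notice per successful reset
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi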
00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.724 08:58:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:47.096 08:58:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.096 08:58:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:17:47.096 08:58:02 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:47.096 [2024-05-15 08:58:03.283734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:47.096 08:58:03 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:47.354 [2024-05-15 08:58:03.564024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:47.354 08:58:03 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:47.919 NVMe0n1 00:17:47.919 08:58:03 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:48.176 00:17:48.176 08:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:48.433 00:17:48.433 08:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:48.433 08:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:48.998 08:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:49.256 08:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:52.581 08:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:52.581 08:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:52.581 08:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=82062 00:17:52.581 08:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.581 08:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 82062 00:17:53.954 0 00:17:53.954 08:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:53.954 [2024-05-15 08:58:01.971617] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:17:53.954 [2024-05-15 08:58:01.971764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81921 ] 00:17:53.954 [2024-05-15 08:58:02.138776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.954 [2024-05-15 08:58:02.222395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.954 [2024-05-15 08:58:05.257605] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:53.954 [2024-05-15 08:58:05.257731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.954 [2024-05-15 08:58:05.257756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.954 [2024-05-15 08:58:05.257775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.954 [2024-05-15 08:58:05.257789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.954 [2024-05-15 08:58:05.257803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.954 [2024-05-15 08:58:05.257817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.954 [2024-05-15 08:58:05.257832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.954 [2024-05-15 08:58:05.257845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.954 [2024-05-15 08:58:05.257859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:53.954 [2024-05-15 08:58:05.257912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.954 [2024-05-15 08:58:05.257944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb15f0 (9): Bad file descriptor 00:17:53.954 [2024-05-15 08:58:05.267177] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:53.954 Running I/O for 1 seconds... 
00:17:53.954 00:17:53.954 Latency(us) 00:17:53.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.954 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:53.954 Verification LBA range: start 0x0 length 0x4000 00:17:53.954 NVMe0n1 : 1.02 2673.26 10.44 0.00 0.00 47493.22 4706.68 47662.55 00:17:53.954 =================================================================================================================== 00:17:53.954 Total : 2673.26 10.44 0.00 0.00 47493.22 4706.68 47662.55 00:17:53.954 08:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:53.954 08:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:53.954 08:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:54.520 08:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:54.520 08:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:54.777 08:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.343 08:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 81921 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 81921 ']' 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 81921 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81921 00:17:58.676 killing process with pid 81921 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81921' 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 81921 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 81921 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:58.676 08:58:14 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:58.935 08:58:15 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:58.935 rmmod nvme_tcp 00:17:58.935 rmmod nvme_fabrics 00:17:58.935 rmmod nvme_keyring 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 81578 ']' 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 81578 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 81578 ']' 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 81578 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.935 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81578 00:17:59.194 killing process with pid 81578 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81578' 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 81578 00:17:59.194 [2024-05-15 08:58:15.176798] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 81578 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:59.194 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:59.195 00:17:59.195 real 0m32.767s 00:17:59.195 user 2m9.550s 00:17:59.195 sys 0m4.586s 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:59.195 
************************************ 00:17:59.195 END TEST nvmf_failover 00:17:59.195 ************************************ 00:17:59.195 08:58:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:59.454 08:58:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:59.454 08:58:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:59.454 08:58:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:59.454 08:58:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:59.454 ************************************ 00:17:59.454 START TEST nvmf_host_discovery 00:17:59.454 ************************************ 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:59.454 * Looking for test storage... 00:17:59.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:59.454 08:58:15 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:59.454 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:59.455 Cannot find device 
"nvmf_tgt_br" 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.455 Cannot find device "nvmf_tgt_br2" 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:59.455 Cannot find device "nvmf_tgt_br" 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:59.455 Cannot find device "nvmf_tgt_br2" 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.455 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:59.713 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:59.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:17:59.714 00:17:59.714 --- 10.0.0.2 ping statistics --- 00:17:59.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.714 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:59.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:59.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:59.714 00:17:59.714 --- 10.0.0.3 ping statistics --- 00:17:59.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.714 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:59.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:59.714 00:17:59.714 --- 10.0.0.1 ping statistics --- 00:17:59.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.714 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=82373 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 82373 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 82373 ']' 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.714 08:58:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:59.973 [2024-05-15 08:58:15.946663] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:59.973 [2024-05-15 08:58:15.946755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.973 [2024-05-15 08:58:16.078493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.973 [2024-05-15 08:58:16.159547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.973 [2024-05-15 08:58:16.159627] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:59.973 [2024-05-15 08:58:16.159647] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.973 [2024-05-15 08:58:16.159660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.973 [2024-05-15 08:58:16.159671] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.973 [2024-05-15 08:58:16.159705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 [2024-05-15 08:58:16.274413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 [2024-05-15 08:58:16.282319] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:00.232 [2024-05-15 08:58:16.282647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 null0 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 null1 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:00.232 
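The target-side setup traced in host/discovery.sh@32-37 is a short sequence of RPCs against the nvmf_tgt running inside the nvmf_tgt_ns_spdk namespace: create the TCP transport, expose the discovery subsystem on 10.0.0.2:8009, and create two null bdevs to serve as namespaces later. A rough consolidation of the same rpc.py calls that appear in the trace (rpc_cmd normally supplies the socket argument itself; paths are shortened here purely for illustration):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512   # size in MB, 512-byte blocks
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine
    # the second target started with -r /tmp/host.sock then drives discovery, as the following lines show:
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test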
08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82405 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82405 /tmp/host.sock 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 82405 ']' 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:00.232 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.232 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 [2024-05-15 08:58:16.377588] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:00.232 [2024-05-15 08:58:16.377717] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82405 ] 00:18:00.490 [2024-05-15 08:58:16.525130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.490 [2024-05-15 08:58:16.611192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
# get_subsystem_names 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:00.749 
08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:00.749 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.008 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:01.008 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:01.008 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.008 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 08:58:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.008 08:58:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 [2024-05-15 08:58:17.126733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:01.008 08:58:17 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:01.008 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:18:01.266 08:58:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:18:01.832 [2024-05-15 08:58:17.767729] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:01.832 [2024-05-15 08:58:17.767782] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:01.832 [2024-05-15 08:58:17.767804] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:01.832 [2024-05-15 08:58:17.853899] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:01.832 [2024-05-15 08:58:17.909888] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:18:01.832 [2024-05-15 08:58:17.909935] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.409 08:58:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.409 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.669 [2024-05-15 08:58:18.695620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:02.669 [2024-05-15 08:58:18.695896] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:02.669 [2024-05-15 08:58:18.695931] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.669 08:58:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:02.669 [2024-05-15 08:58:18.782007] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.669 [2024-05-15 08:58:18.839455] 
bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:02.669 [2024-05-15 08:58:18.839499] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:02.669 [2024-05-15 08:58:18.839512] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:02.669 08:58:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.046 [2024-05-15 08:58:19.993151] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:04.046 [2024-05-15 08:58:19.993193] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:04.046 [2024-05-15 08:58:19.997786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.046 [2024-05-15 08:58:19.997835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.046 [2024-05-15 08:58:19.997852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.046 [2024-05-15 08:58:19.997862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.046 [2024-05-15 08:58:19.997873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.046 [2024-05-15 08:58:19.997883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.046 [2024-05-15 08:58:19.997893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.046 [2024-05-15 08:58:19.997903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.046 [2024-05-15 08:58:19.997914] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.046 08:58:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.046 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.046 [2024-05-15 08:58:20.007828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.046 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.046 [2024-05-15 08:58:20.017845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.046 [2024-05-15 08:58:20.017996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.046 [2024-05-15 08:58:20.018022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.046 [2024-05-15 08:58:20.018035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.046 [2024-05-15 08:58:20.018054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.046 [2024-05-15 08:58:20.018071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.046 [2024-05-15 08:58:20.018081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.046 [2024-05-15 08:58:20.018094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.046 [2024-05-15 08:58:20.018111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:04.046 [2024-05-15 08:58:20.027914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.046 [2024-05-15 08:58:20.028008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.046 [2024-05-15 08:58:20.028031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.047 [2024-05-15 08:58:20.028042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.047 [2024-05-15 08:58:20.028060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.047 [2024-05-15 08:58:20.028081] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.047 [2024-05-15 08:58:20.028102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.047 [2024-05-15 08:58:20.028113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.047 [2024-05-15 08:58:20.028129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:04.047 [2024-05-15 08:58:20.037975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.047 [2024-05-15 08:58:20.038084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.047 [2024-05-15 08:58:20.038110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.047 [2024-05-15 08:58:20.038123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.047 [2024-05-15 08:58:20.038141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.047 [2024-05-15 08:58:20.038157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.047 [2024-05-15 08:58:20.038167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.047 [2024-05-15 08:58:20.038178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.047 [2024-05-15 08:58:20.038232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:04.047 [2024-05-15 08:58:20.048043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.047 [2024-05-15 08:58:20.048141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.047 [2024-05-15 08:58:20.048164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.047 [2024-05-15 08:58:20.048175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.047 [2024-05-15 08:58:20.048192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.047 [2024-05-15 08:58:20.048219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.047 [2024-05-15 08:58:20.048231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.047 [2024-05-15 08:58:20.048242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.047 [2024-05-15 08:58:20.048258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.047 [2024-05-15 08:58:20.058099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.047 [2024-05-15 08:58:20.058194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.047 [2024-05-15 08:58:20.058218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.047 [2024-05-15 08:58:20.058230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.047 [2024-05-15 08:58:20.058247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.047 [2024-05-15 08:58:20.058263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.047 [2024-05-15 08:58:20.058272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.047 [2024-05-15 08:58:20.058282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.047 [2024-05-15 08:58:20.058298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:04.047 [2024-05-15 08:58:20.068158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.047 [2024-05-15 08:58:20.068246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.047 [2024-05-15 08:58:20.068267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.047 [2024-05-15 08:58:20.068279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.047 [2024-05-15 08:58:20.068296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.047 [2024-05-15 08:58:20.068312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.047 [2024-05-15 08:58:20.068322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.047 [2024-05-15 08:58:20.068332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.047 [2024-05-15 08:58:20.068348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:04.047 [2024-05-15 08:58:20.078218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:04.047 [2024-05-15 08:58:20.078339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.047 [2024-05-15 08:58:20.078364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c02d0 with addr=10.0.0.2, port=4420 00:18:04.047 [2024-05-15 08:58:20.078377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c02d0 is same with the state(5) to be set 00:18:04.047 [2024-05-15 08:58:20.078396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c02d0 (9): Bad file descriptor 00:18:04.047 [2024-05-15 08:58:20.078413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:04.047 [2024-05-15 08:58:20.078423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:04.047 [2024-05-15 08:58:20.078433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:04.047 [2024-05-15 08:58:20.078449] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:04.047 [2024-05-15 08:58:20.079508] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:04.047 [2024-05-15 08:58:20.079540] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:04.047 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.048 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.307 08:58:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.242 [2024-05-15 08:58:21.368084] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:05.243 [2024-05-15 08:58:21.368138] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:05.243 [2024-05-15 08:58:21.368159] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:05.243 [2024-05-15 08:58:21.454206] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:05.502 [2024-05-15 08:58:21.513789] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:05.502 [2024-05-15 08:58:21.513859] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.502 2024/05/15 08:58:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:05.502 request: 00:18:05.502 { 00:18:05.502 "method": "bdev_nvme_start_discovery", 00:18:05.502 "params": { 00:18:05.502 "name": "nvme", 00:18:05.502 "trtype": "tcp", 00:18:05.502 "traddr": "10.0.0.2", 00:18:05.502 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:05.502 "adrfam": "ipv4", 00:18:05.502 "trsvcid": "8009", 00:18:05.502 "wait_for_attach": true 00:18:05.502 } 00:18:05.502 } 00:18:05.502 Got JSON-RPC error response 00:18:05.502 GoRPCClient: error on JSON-RPC call 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 2024/05/15 08:58:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:05.502 request: 00:18:05.502 { 00:18:05.502 "method": "bdev_nvme_start_discovery", 00:18:05.502 "params": { 00:18:05.502 "name": "nvme_second", 00:18:05.502 "trtype": "tcp", 00:18:05.502 "traddr": "10.0.0.2", 00:18:05.502 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:05.502 "adrfam": "ipv4", 00:18:05.502 "trsvcid": "8009", 00:18:05.502 "wait_for_attach": true 00:18:05.502 } 00:18:05.502 } 00:18:05.502 Got JSON-RPC error response 00:18:05.502 GoRPCClient: error on JSON-RPC call 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 
00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.502 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.761 08:58:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.695 [2024-05-15 08:58:22.779549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.695 [2024-05-15 08:58:22.779642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7e30 with addr=10.0.0.2, port=8010 00:18:06.695 [2024-05-15 08:58:22.779665] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:06.695 [2024-05-15 08:58:22.779676] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:06.695 [2024-05-15 08:58:22.779686] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:07.627 [2024-05-15 08:58:23.779538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.627 [2024-05-15 08:58:23.779629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d7e30 with addr=10.0.0.2, port=8010 00:18:07.627 [2024-05-15 08:58:23.779654] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:07.627 [2024-05-15 08:58:23.779666] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:07.627 [2024-05-15 08:58:23.779676] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:08.561 [2024-05-15 08:58:24.779383] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
00:18:08.561 2024/05/15 08:58:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:08.561 request: 00:18:08.561 { 00:18:08.561 "method": "bdev_nvme_start_discovery", 00:18:08.561 "params": { 00:18:08.561 "name": "nvme_second", 00:18:08.561 "trtype": "tcp", 00:18:08.561 "traddr": "10.0.0.2", 00:18:08.561 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:08.561 "adrfam": "ipv4", 00:18:08.561 "trsvcid": "8010", 00:18:08.561 "attach_timeout_ms": 3000 00:18:08.561 } 00:18:08.561 } 00:18:08.561 Got JSON-RPC error response 00:18:08.561 GoRPCClient: error on JSON-RPC call 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:08.561 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82405 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.820 rmmod nvme_tcp 00:18:08.820 rmmod nvme_fabrics 00:18:08.820 rmmod nvme_keyring 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 82373 ']' 
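The timed-out attach above boils down to a single bdev_nvme_start_discovery JSON-RPC call carrying an attach timeout. As a minimal illustrative sketch (not part of the captured output, and assuming the rpc_cmd helper seen in the trace wraps SPDK's scripts/rpc.py), the same request could be issued by hand against the host application socket this run uses (/tmp/host.sock), with the parameters shown in the request body above:

    # Sketch: ask the host to start a discovery controller toward 10.0.0.2:8010
    # over TCP, allowing 3000 ms for the attach before the poller gives up (-110).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

The -b argument maps to the "name" field of the request; on success the discovery controller would appear under that name in bdev_nvme_get_discovery_info, as the earlier get_discovery_ctrlrs checks in this trace do for "nvme". In the run above no discovery service answers on port 8010 within the timeout, so the call returns Code=-110 (Connection timed out), which is exactly the failure the NOT wrapper expects.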
00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 82373 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 82373 ']' 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 82373 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82373 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:08.820 killing process with pid 82373 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82373' 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 82373 00:18:08.820 [2024-05-15 08:58:24.955162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:08.820 08:58:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 82373 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:09.078 00:18:09.078 real 0m9.741s 00:18:09.078 user 0m19.671s 00:18:09.078 sys 0m1.434s 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 ************************************ 00:18:09.078 END TEST nvmf_host_discovery 00:18:09.078 ************************************ 00:18:09.078 08:58:25 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:09.078 08:58:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:09.078 08:58:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:09.078 08:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 ************************************ 00:18:09.078 START TEST nvmf_host_multipath_status 00:18:09.078 ************************************ 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:09.078 * Looking for test storage... 00:18:09.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.078 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:09.337 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:09.338 Cannot find device "nvmf_tgt_br" 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:09.338 Cannot find device "nvmf_tgt_br2" 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:09.338 Cannot find device "nvmf_tgt_br" 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:09.338 Cannot find device "nvmf_tgt_br2" 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.338 08:58:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.338 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:09.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:09.597 00:18:09.597 --- 10.0.0.2 ping statistics --- 00:18:09.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.597 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:09.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:09.597 00:18:09.597 --- 10.0.0.3 ping statistics --- 00:18:09.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.597 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:09.597 00:18:09.597 --- 10.0.0.1 ping statistics --- 00:18:09.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.597 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:09.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=82869 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 82869 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 82869 ']' 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:09.597 08:58:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:09.597 [2024-05-15 08:58:25.745480] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:09.597 [2024-05-15 08:58:25.746242] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.855 [2024-05-15 08:58:25.884525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:09.855 [2024-05-15 08:58:25.945358] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:09.855 [2024-05-15 08:58:25.945606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.855 [2024-05-15 08:58:25.945744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.855 [2024-05-15 08:58:25.945961] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.855 [2024-05-15 08:58:25.946073] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.855 [2024-05-15 08:58:25.946265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.855 [2024-05-15 08:58:25.946276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82869 00:18:10.788 08:58:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:11.046 [2024-05-15 08:58:27.114029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.046 08:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:11.304 Malloc0 00:18:11.304 08:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:11.562 08:58:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.128 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.386 [2024-05-15 08:58:28.373730] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:12.386 [2024-05-15 08:58:28.373997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.386 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:12.386 [2024-05-15 08:58:28.618091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82973 00:18:12.643 08:58:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82973 /var/tmp/bdevperf.sock 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 82973 ']' 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:12.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:12.643 08:58:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:13.620 08:58:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:13.620 08:58:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:18:13.620 08:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:13.878 08:58:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:14.136 Nvme0n1 00:18:14.393 08:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:14.701 Nvme0n1 00:18:14.701 08:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:14.701 08:58:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:17.224 08:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:17.224 08:58:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:17.224 08:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:17.224 08:58:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:18.595 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:18.595 08:58:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:18.595 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.595 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:18.853 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.853 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:18.853 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.853 08:58:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:19.110 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:19.110 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:19.110 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.110 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:19.369 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.369 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:19.369 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.369 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:19.934 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.934 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:19.934 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:19.934 08:58:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.192 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.192 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:20.192 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.192 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:20.448 08:58:36 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.448 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:20.448 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:20.707 08:58:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:20.967 08:58:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:22.342 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:22.342 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:22.342 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.342 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:22.600 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:22.600 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:22.600 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.600 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:22.857 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.857 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:22.857 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.857 08:58:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:23.115 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.115 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:23.115 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.115 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:23.373 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.373 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:23.373 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.373 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:23.631 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.631 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:23.631 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.631 08:58:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:24.197 08:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.197 08:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:24.197 08:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:24.197 08:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:24.455 08:58:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.829 08:58:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:26.087 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:26.087 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:26.087 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.087 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:18:26.344 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.344 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:26.344 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.344 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:26.602 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.602 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:26.602 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:26.602 08:58:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.168 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.168 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:27.168 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.168 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:27.425 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.425 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:27.425 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:27.684 08:58:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:28.251 08:58:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:29.184 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:29.184 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:29.184 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.184 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:29.442 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.442 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current false 00:18:29.442 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.442 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:29.699 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:29.699 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:29.699 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.699 08:58:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:30.263 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.263 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:30.263 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.263 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:30.522 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.522 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:30.522 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.522 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:30.780 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.780 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:30.780 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.780 08:58:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:31.038 08:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:31.038 08:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:31.038 08:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:31.604 08:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n inaccessible 00:18:31.862 08:58:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:32.795 08:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:32.795 08:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:32.795 08:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:32.795 08:58:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.053 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:33.053 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:33.053 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.053 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:33.312 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:33.312 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:33.312 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.312 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:33.571 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.571 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:33.571 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.571 08:58:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:34.148 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.148 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:34.148 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.148 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:34.408 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:34.408 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:34.408 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.408 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:34.974 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:34.975 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:34.975 08:58:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:35.233 08:58:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:35.799 08:58:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:36.734 08:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:36.734 08:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:36.734 08:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:36.734 08:58:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.993 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:36.993 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:36.993 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:36.993 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.252 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.252 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:37.252 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.252 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.510 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.510 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:37.510 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:37.510 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:18:37.768 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.768 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:37.768 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.768 08:58:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:38.027 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:38.027 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:38.027 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.027 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:38.293 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.293 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:38.553 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:38.553 08:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:38.811 08:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:39.070 08:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:40.445 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.704 08:58:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.704 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:40.704 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.704 08:58:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:40.962 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.962 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:40.962 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.962 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:41.221 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.221 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:41.221 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.221 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:41.788 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.788 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:41.788 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.788 08:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:42.047 08:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.047 08:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:42.047 08:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:42.305 08:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:42.564 08:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:43.499 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:43.499 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:43.499 08:58:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.499 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:43.757 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:43.758 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:43.758 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:43.758 08:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.015 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.015 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:44.015 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.015 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:44.273 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.274 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:44.274 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.274 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:44.532 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.532 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:44.532 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.532 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.790 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.790 08:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:44.790 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.790 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:45.048 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.048 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:45.048 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:45.306 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:45.565 08:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:46.936 08:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:46.936 08:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:46.936 08:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.937 08:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.937 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.937 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:46.937 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.937 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:47.194 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.194 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:47.194 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.194 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:47.452 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.452 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:47.452 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:47.452 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.709 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.709 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:47.710 08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.710 
08:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.969 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.969 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:47.969 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.969 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:48.535 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.535 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:48.535 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:48.535 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:48.793 08:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:50.167 08:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:50.167 08:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:50.167 08:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:50.167 08:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.167 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.168 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:50.168 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.168 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:50.425 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:50.425 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:50.425 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.425 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:50.683 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:18:50.683 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:50.683 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.683 08:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:50.941 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.941 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:50.941 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.941 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:51.200 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.200 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:51.200 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.200 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:51.515 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.515 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82973 00:18:51.515 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 82973 ']' 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 82973 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82973 00:18:51.516 killing process with pid 82973 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82973' 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 82973 00:18:51.516 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 82973 00:18:51.777 Connection closed with partial response: 00:18:51.777 00:18:51.777 00:18:51.777 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82973 00:18:51.777 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:51.777 [2024-05-15 08:58:28.681145] Starting SPDK 
v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:51.777 [2024-05-15 08:58:28.681246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82973 ] 00:18:51.777 [2024-05-15 08:58:28.837539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.777 [2024-05-15 08:58:28.919717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.777 Running I/O for 90 seconds... 00:18:51.777 [2024-05-15 08:58:47.608073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.777 [2024-05-15 08:58:47.608180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.608612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.608652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.609716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.609750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.609779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.609798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.609829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.609884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.609927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.609954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.609987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
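The port_status and check_status helpers traced above (host/multipath_status.sh@64 and @68-@73) reduce to one pattern: dump the host's I/O paths over the bdevperf RPC socket, select the path whose listener port matches, and compare a single field against the expected value. A minimal Bash sketch, reconstructed only from the traced commands (the script's actual implementation may differ in details):

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # ask the bdevperf app for its view of the I/O paths and pull one field
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"${port}\").${attr}")
        [[ "$actual" == "$expected" ]]
    }

    check_status() {
        # arguments: 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Read this way, the final check_status true false true true true false at @135 above asserts that once the 4421 listener is made inaccessible, only the 4420 path stays current and accessible while both TCP connections remain up.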
00:18:51.777 [2024-05-15 08:58:47.610340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.610968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.610987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.611022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.777 [2024-05-15 08:58:47.611052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:51.777 [2024-05-15 08:58:47.611083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.611966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.611982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
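The set_ANA_state helper traced at host/multipath_status.sh@59-@60 is simply two listener updates against the target, one per port; the sketch below mirrors the traced invocations:

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The sleep 1 that follows each call in the trace presumably gives the initiator time to pick up the ANA change notification before check_status reads the paths back.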
00:18:51.778 [2024-05-15 08:58:47.612052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.612968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.612991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.613057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.613111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.613164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.613220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.778 [2024-05-15 08:58:47.613280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.613325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.778 [2024-05-15 08:58:47.613355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.778 [2024-05-15 08:58:47.613381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.613410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.613438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.613725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.613753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.613787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.613811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.613850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.613876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.613915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.613934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.613962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.613986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
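Taken together, the traced steps @108 through @135 walk the two listeners through the interesting ANA combinations, switch the Nvme0n1 bdev to the active_active multipath policy partway through, and assert the visible path state after each change. Condensed, using the helpers as sketched above:

    set_ANA_state inaccessible inaccessible;   sleep 1; check_status false false true true false false
    set_ANA_state inaccessible optimized;      sleep 1; check_status false true  true true false true
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state optimized optimized;         sleep 1; check_status true  true  true true true  true
    set_ANA_state non_optimized optimized;     sleep 1; check_status false true  true true true  true
    set_ANA_state non_optimized non_optimized; sleep 1; check_status true  true  true true true  true
    set_ANA_state non_optimized inaccessible;  sleep 1; check_status true  false true true true  false

Under the default active_passive policy only one path is reported as current at a time, which is presumably why both paths show up as current only after active_active has been set.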
00:18:51.779 [2024-05-15 08:58:47.614026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.614936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.614979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:51.779 [2024-05-15 08:58:47.615910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.615949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.615966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.616002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.616026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.616076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.779 [2024-05-15 08:58:47.616119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:51.779 [2024-05-15 08:58:47.616160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:58:47.616921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:58:47.616949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.973456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.973541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.973617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.973660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.973980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.973995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:18:51.780 [2024-05-15 08:59:04.974055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.974107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.974145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.974183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.974223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.780 [2024-05-15 08:59:04.974500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.974538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:51.780 [2024-05-15 08:59:04.974574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.780 [2024-05-15 08:59:04.974593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.974616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.974632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.974655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.974671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.976926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.976962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.977271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:51.781 [2024-05-15 08:59:04.977497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.977557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.977595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.978584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.978631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.978686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.781 [2024-05-15 08:59:04.978725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.978974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.978989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.979011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.979027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:51.781 [2024-05-15 08:59:04.979049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.781 [2024-05-15 08:59:04.979064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:51.781 Received shutdown signal, test time was about 36.700565 seconds 00:18:51.781 00:18:51.781 Latency(us) 00:18:51.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.781 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.782 Verification LBA range: start 0x0 length 0x4000 00:18:51.782 Nvme0n1 : 36.70 7375.18 28.81 0.00 0.00 17322.86 314.65 5033164.80 00:18:51.782 =================================================================================================================== 00:18:51.782 Total : 7375.18 28.81 0.00 0.00 17322.86 314.65 5033164.80 00:18:51.782 08:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.039 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:52.039 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:52.039 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:52.039 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.039 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:52.296 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.296 08:59:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:52.296 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.296 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.296 rmmod nvme_tcp 00:18:52.296 rmmod nvme_fabrics 00:18:52.296 rmmod nvme_keyring 00:18:52.296 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.296 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 82869 ']' 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 82869 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 82869 ']' 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 82869 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82869 00:18:52.297 killing process with pid 82869 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82869' 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 82869 00:18:52.297 [2024-05-15 08:59:08.358405] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:52.297 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 82869 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:52.555 ************************************ 00:18:52.555 END TEST nvmf_host_multipath_status 00:18:52.555 ************************************ 00:18:52.555 00:18:52.555 real 0m43.355s 00:18:52.555 user 2m23.634s 
00:18:52.555 sys 0m10.210s 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:52.555 08:59:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:52.555 08:59:08 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:52.555 08:59:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:52.555 08:59:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:52.555 08:59:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.555 ************************************ 00:18:52.555 START TEST nvmf_discovery_remove_ifc 00:18:52.555 ************************************ 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:52.555 * Looking for test storage... 00:18:52.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.555 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.556 
08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.556 08:59:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:52.556 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:52.814 Cannot find device "nvmf_tgt_br" 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.814 Cannot find device "nvmf_tgt_br2" 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:52.814 Cannot find device "nvmf_tgt_br" 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:52.814 Cannot find device "nvmf_tgt_br2" 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.814 08:59:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:52.814 08:59:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.814 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.814 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.814 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:52.814 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:52.814 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.814 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:53.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:18:53.073 00:18:53.073 --- 10.0.0.2 ping statistics --- 00:18:53.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.073 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:53.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:53.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:53.073 00:18:53.073 --- 10.0.0.3 ping statistics --- 00:18:53.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.073 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:53.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:53.073 00:18:53.073 --- 10.0.0.1 ping statistics --- 00:18:53.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.073 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:53.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=84314 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 84314 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 84314 ']' 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:53.073 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.074 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:53.074 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:53.074 08:59:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:53.074 [2024-05-15 08:59:09.176219] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:53.074 [2024-05-15 08:59:09.176308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.333 [2024-05-15 08:59:09.316176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.333 [2024-05-15 08:59:09.383554] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:53.333 [2024-05-15 08:59:09.383821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.333 [2024-05-15 08:59:09.384049] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.333 [2024-05-15 08:59:09.384227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.333 [2024-05-15 08:59:09.384341] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.333 [2024-05-15 08:59:09.384414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.271 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.271 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:18:54.271 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.271 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.271 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.271 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.272 [2024-05-15 08:59:10.247879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.272 [2024-05-15 08:59:10.255809] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:54.272 [2024-05-15 08:59:10.256213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:54.272 null0 00:18:54.272 [2024-05-15 08:59:10.287960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.272 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=84365 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84365 /tmp/host.sock 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 84365 ']' 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.272 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.272 [2024-05-15 08:59:10.363697] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:54.272 [2024-05-15 08:59:10.363789] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84365 ] 00:18:54.272 [2024-05-15 08:59:10.502324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.530 [2024-05-15 08:59:10.602210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.530 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.531 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:54.531 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.531 08:59:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.907 [2024-05-15 08:59:11.731074] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:55.907 [2024-05-15 08:59:11.731289] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:55.907 [2024-05-15 08:59:11.731353] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:55.907 [2024-05-15 08:59:11.817246] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:55.907 [2024-05-15 08:59:11.873549] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:55.907 [2024-05-15 08:59:11.873815] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:55.907 [2024-05-15 
08:59:11.873893] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:55.907 [2024-05-15 08:59:11.873992] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:55.907 [2024-05-15 08:59:11.874146] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.907 [2024-05-15 08:59:11.879482] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a90760 was disconnected and freed. delete nvme_qpair. 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:55.907 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:55.908 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.908 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:55.908 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:55.908 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.908 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.908 08:59:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.908 08:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:55.908 08:59:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.845 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.101 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:57.101 08:59:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:58.036 08:59:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:58.972 08:59:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:00.347 08:59:16 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:00.347 08:59:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.283 [2024-05-15 08:59:17.301467] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:01.283 [2024-05-15 08:59:17.301575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.283 [2024-05-15 08:59:17.301598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.283 [2024-05-15 08:59:17.301617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.283 [2024-05-15 08:59:17.301631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.283 [2024-05-15 08:59:17.301645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.283 [2024-05-15 08:59:17.301660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.283 [2024-05-15 08:59:17.301683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.283 [2024-05-15 08:59:17.301698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.283 [2024-05-15 08:59:17.301715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.283 [2024-05-15 08:59:17.301725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.283 
[2024-05-15 08:59:17.301735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5a3d0 is same with the state(5) to be set 00:19:01.283 [2024-05-15 08:59:17.311461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5a3d0 (9): Bad file descriptor 00:19:01.283 [2024-05-15 08:59:17.321497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:01.283 08:59:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:02.219 [2024-05-15 08:59:18.327703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:02.219 [2024-05-15 08:59:18.327842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5a3d0 with addr=10.0.0.2, port=4420 00:19:02.219 [2024-05-15 08:59:18.327880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5a3d0 is same with the state(5) to be set 00:19:02.219 [2024-05-15 08:59:18.327959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5a3d0 (9): Bad file descriptor 00:19:02.219 [2024-05-15 08:59:18.328160] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:02.219 [2024-05-15 08:59:18.328201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:02.219 [2024-05-15 08:59:18.328220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:02.219 [2024-05-15 08:59:18.328240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:02.219 [2024-05-15 08:59:18.328282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:02.219 [2024-05-15 08:59:18.328303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:02.219 08:59:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:03.178 [2024-05-15 08:59:19.328381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
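The one-second cadence of the bdev_get_bdevs calls above is the test's wait loop: after the target interface goes down, the host side keeps listing bdevs until nvme0n1 disappears. A minimal sketch of that loop, assuming SPDK's scripts/rpc.py as the backend for the rpc_cmd helper seen in the trace (the helper names here are illustrative, not the script verbatim, and any timeout handling is omitted):

    # Poll the host app's RPC socket until the bdev list matches the expected value.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    get_bdev_list() {
        # Same pipeline the trace shows: list bdevs, keep only the names, normalize order.
        $rpc_py bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # Re-check once per second, matching the repeated "sleep 1" in the trace.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev ""   # block until nvme0n1 has been torn down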
00:19:03.178 [2024-05-15 08:59:19.328483] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:03.178 [2024-05-15 08:59:19.328576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.178 [2024-05-15 08:59:19.328596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.178 [2024-05-15 08:59:19.328611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.178 [2024-05-15 08:59:19.328620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.178 [2024-05-15 08:59:19.328630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.178 [2024-05-15 08:59:19.328640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.178 [2024-05-15 08:59:19.328650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.178 [2024-05-15 08:59:19.328659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.178 [2024-05-15 08:59:19.328669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.178 [2024-05-15 08:59:19.328679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.178 [2024-05-15 08:59:19.328689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
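The timing of the failure sequence above (a reconnect attempt each second, then controller teardown and removal of the discovery entry) follows from the options the test passed when it started persistent discovery earlier in this trace. A condensed reconstruction, again assuming scripts/rpc.py behind rpc_cmd; the commands and flag values are copied from the trace, while the comments are interpretation:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    $rpc_py bdev_nvme_set_options -e 1     # as traced, before framework init
    $rpc_py framework_start_init
    # --reconnect-delay-sec 1      : retry the lost connection once per second
    # --fast-io-fail-timeout-sec 1 : fail outstanding I/O after one second without a path
    # --ctrlr-loss-timeout-sec 2   : give up after two seconds and delete the controller and its bdevs
    $rpc_py bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

With the 10.0.0.2 address gone, the reconnect at 08:59:18 fails with errno 110, and once the loss timeout expires the discovery entry for nqn.2016-06.io.spdk:cnode0 is removed, which is what empties the bdev list that the wait loop is polling.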
00:19:03.178 [2024-05-15 08:59:19.328856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f91c0 (9): Bad file descriptor 00:19:03.178 [2024-05-15 08:59:19.329867] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:03.178 [2024-05-15 08:59:19.329893] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:03.178 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:03.179 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.179 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.179 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:03.179 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:03.179 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:03.179 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:03.437 08:59:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:04.385 08:59:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:05.320 [2024-05-15 08:59:21.333448] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:05.320 [2024-05-15 08:59:21.333493] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:05.320 [2024-05-15 08:59:21.333513] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:05.320 [2024-05-15 08:59:21.421604] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:05.320 [2024-05-15 08:59:21.483737] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:05.320 [2024-05-15 08:59:21.483806] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:05.320 [2024-05-15 08:59:21.483832] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:05.320 [2024-05-15 08:59:21.483852] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:05.320 [2024-05-15 08:59:21.483862] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:05.320 [2024-05-15 08:59:21.492045] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a71280 was disconnected and freed. delete nvme_qpair. 
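Restoring the address and bringing nvmf_tgt_if back up lets the still-registered discovery service reconnect: the poller attaches the subsystem again, this time as controller nvme1 (the -b nvme prefix plus a fresh index), so the namespace reappears as bdev nvme1n1 and the wait loop switches to expecting that name. The whole fault-and-recovery exercise condenses to the sketch below, using the namespace and interface names from this run and the wait_for_bdev helper sketched earlier; the step numbers in the comments refer to the discovery_remove_ifc.sh lines visible in the trace:

    NS=nvmf_tgt_ns_spdk
    IF=nvmf_tgt_if

    # Remove the data-path address and take the link down (script lines 75-76).
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$IF"
    ip netns exec "$NS" ip link set "$IF" down
    wait_for_bdev ""          # host deletes nvme0n1 once the loss timeout expires

    # Restore the address and the link (script lines 82-83).
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF"
    ip netns exec "$NS" ip link set "$IF" up
    wait_for_bdev nvme1n1     # discovery re-attaches and exposes a new bdev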
00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 84365 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 84365 ']' 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 84365 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84365 00:19:05.578 killing process with pid 84365 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84365' 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 84365 00:19:05.578 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 84365 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.837 rmmod nvme_tcp 00:19:05.837 rmmod nvme_fabrics 00:19:05.837 rmmod nvme_keyring 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:05.837 08:59:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 84314 ']' 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 84314 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 84314 ']' 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 84314 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84314 00:19:05.837 killing process with pid 84314 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84314' 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 84314 00:19:05.837 [2024-05-15 08:59:21.978278] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:05.837 08:59:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 84314 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:06.094 00:19:06.094 real 0m13.544s 00:19:06.094 user 0m24.108s 00:19:06.094 sys 0m1.499s 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:06.094 08:59:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 ************************************ 00:19:06.094 END TEST nvmf_discovery_remove_ifc 00:19:06.094 ************************************ 00:19:06.094 08:59:22 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:06.095 08:59:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:06.095 08:59:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:06.095 08:59:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.095 ************************************ 
00:19:06.095 START TEST nvmf_identify_kernel_target 00:19:06.095 ************************************ 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:06.095 * Looking for test storage... 00:19:06.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.095 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.353 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:06.354 08:59:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:06.354 Cannot find device "nvmf_tgt_br" 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.354 Cannot find device "nvmf_tgt_br2" 00:19:06.354 08:59:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:06.354 Cannot find device "nvmf_tgt_br" 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:06.354 Cannot find device "nvmf_tgt_br2" 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.354 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
set nvmf_tgt_if2 up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:06.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:06.613 00:19:06.613 --- 10.0.0.2 ping statistics --- 00:19:06.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.613 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:06.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:06.613 00:19:06.613 --- 10.0.0.3 ping statistics --- 00:19:06.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.613 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:06.613 00:19:06.613 --- 10.0.0.1 ping statistics --- 00:19:06.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.613 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:06.613 08:59:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:06.613 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:06.614 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:06.614 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:06.614 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:06.614 08:59:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:06.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:06.872 Waiting for block devices as requested 00:19:07.132 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:07.132 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:07.132 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:07.132 No valid GPT data, bailing 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:07.389 No valid GPT data, bailing 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:07.389 No valid GPT data, bailing 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:07.389 No valid GPT data, bailing 00:19:07.389 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:07.647 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -a 10.0.0.1 -t tcp -s 4420 00:19:07.648 00:19:07.648 Discovery Log Number of Records 2, Generation counter 2 00:19:07.648 =====Discovery Log Entry 0====== 00:19:07.648 trtype: tcp 00:19:07.648 adrfam: ipv4 00:19:07.648 subtype: current discovery subsystem 00:19:07.648 treq: not specified, sq flow control disable supported 00:19:07.648 portid: 1 00:19:07.648 trsvcid: 4420 00:19:07.648 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:07.648 traddr: 10.0.0.1 00:19:07.648 eflags: none 00:19:07.648 sectype: none 00:19:07.648 =====Discovery Log Entry 1====== 00:19:07.648 trtype: tcp 00:19:07.648 adrfam: ipv4 00:19:07.648 subtype: nvme subsystem 00:19:07.648 treq: not specified, sq flow control disable supported 00:19:07.648 portid: 1 00:19:07.648 trsvcid: 4420 00:19:07.648 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:07.648 traddr: 10.0.0.1 00:19:07.648 eflags: none 00:19:07.648 sectype: none 00:19:07.648 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:07.648 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:07.648 ===================================================== 00:19:07.648 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:07.648 ===================================================== 00:19:07.648 Controller Capabilities/Features 00:19:07.648 ================================ 00:19:07.648 Vendor ID: 0000 00:19:07.648 Subsystem Vendor ID: 0000 00:19:07.648 Serial Number: 9c5c6cae4f16ae32a1e4 00:19:07.648 Model Number: Linux 00:19:07.648 Firmware Version: 6.7.0-68 00:19:07.648 Recommended Arb Burst: 0 
00:19:07.648 IEEE OUI Identifier: 00 00 00 00:19:07.648 Multi-path I/O 00:19:07.648 May have multiple subsystem ports: No 00:19:07.648 May have multiple controllers: No 00:19:07.648 Associated with SR-IOV VF: No 00:19:07.648 Max Data Transfer Size: Unlimited 00:19:07.648 Max Number of Namespaces: 0 00:19:07.648 Max Number of I/O Queues: 1024 00:19:07.648 NVMe Specification Version (VS): 1.3 00:19:07.648 NVMe Specification Version (Identify): 1.3 00:19:07.648 Maximum Queue Entries: 1024 00:19:07.648 Contiguous Queues Required: No 00:19:07.648 Arbitration Mechanisms Supported 00:19:07.648 Weighted Round Robin: Not Supported 00:19:07.648 Vendor Specific: Not Supported 00:19:07.648 Reset Timeout: 7500 ms 00:19:07.648 Doorbell Stride: 4 bytes 00:19:07.648 NVM Subsystem Reset: Not Supported 00:19:07.648 Command Sets Supported 00:19:07.648 NVM Command Set: Supported 00:19:07.648 Boot Partition: Not Supported 00:19:07.648 Memory Page Size Minimum: 4096 bytes 00:19:07.648 Memory Page Size Maximum: 4096 bytes 00:19:07.648 Persistent Memory Region: Not Supported 00:19:07.648 Optional Asynchronous Events Supported 00:19:07.648 Namespace Attribute Notices: Not Supported 00:19:07.648 Firmware Activation Notices: Not Supported 00:19:07.648 ANA Change Notices: Not Supported 00:19:07.648 PLE Aggregate Log Change Notices: Not Supported 00:19:07.648 LBA Status Info Alert Notices: Not Supported 00:19:07.648 EGE Aggregate Log Change Notices: Not Supported 00:19:07.648 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.648 Zone Descriptor Change Notices: Not Supported 00:19:07.648 Discovery Log Change Notices: Supported 00:19:07.648 Controller Attributes 00:19:07.648 128-bit Host Identifier: Not Supported 00:19:07.648 Non-Operational Permissive Mode: Not Supported 00:19:07.648 NVM Sets: Not Supported 00:19:07.648 Read Recovery Levels: Not Supported 00:19:07.648 Endurance Groups: Not Supported 00:19:07.648 Predictable Latency Mode: Not Supported 00:19:07.648 Traffic Based Keep ALive: Not Supported 00:19:07.648 Namespace Granularity: Not Supported 00:19:07.648 SQ Associations: Not Supported 00:19:07.648 UUID List: Not Supported 00:19:07.648 Multi-Domain Subsystem: Not Supported 00:19:07.648 Fixed Capacity Management: Not Supported 00:19:07.648 Variable Capacity Management: Not Supported 00:19:07.648 Delete Endurance Group: Not Supported 00:19:07.648 Delete NVM Set: Not Supported 00:19:07.648 Extended LBA Formats Supported: Not Supported 00:19:07.648 Flexible Data Placement Supported: Not Supported 00:19:07.648 00:19:07.648 Controller Memory Buffer Support 00:19:07.648 ================================ 00:19:07.648 Supported: No 00:19:07.648 00:19:07.648 Persistent Memory Region Support 00:19:07.648 ================================ 00:19:07.648 Supported: No 00:19:07.648 00:19:07.648 Admin Command Set Attributes 00:19:07.648 ============================ 00:19:07.648 Security Send/Receive: Not Supported 00:19:07.648 Format NVM: Not Supported 00:19:07.648 Firmware Activate/Download: Not Supported 00:19:07.648 Namespace Management: Not Supported 00:19:07.648 Device Self-Test: Not Supported 00:19:07.648 Directives: Not Supported 00:19:07.648 NVMe-MI: Not Supported 00:19:07.648 Virtualization Management: Not Supported 00:19:07.648 Doorbell Buffer Config: Not Supported 00:19:07.648 Get LBA Status Capability: Not Supported 00:19:07.648 Command & Feature Lockdown Capability: Not Supported 00:19:07.648 Abort Command Limit: 1 00:19:07.648 Async Event Request Limit: 1 00:19:07.648 Number of Firmware Slots: N/A 
00:19:07.648 Firmware Slot 1 Read-Only: N/A 00:19:07.648 Firmware Activation Without Reset: N/A 00:19:07.648 Multiple Update Detection Support: N/A 00:19:07.648 Firmware Update Granularity: No Information Provided 00:19:07.648 Per-Namespace SMART Log: No 00:19:07.648 Asymmetric Namespace Access Log Page: Not Supported 00:19:07.648 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:07.648 Command Effects Log Page: Not Supported 00:19:07.648 Get Log Page Extended Data: Supported 00:19:07.648 Telemetry Log Pages: Not Supported 00:19:07.648 Persistent Event Log Pages: Not Supported 00:19:07.648 Supported Log Pages Log Page: May Support 00:19:07.648 Commands Supported & Effects Log Page: Not Supported 00:19:07.648 Feature Identifiers & Effects Log Page:May Support 00:19:07.648 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.648 Data Area 4 for Telemetry Log: Not Supported 00:19:07.648 Error Log Page Entries Supported: 1 00:19:07.648 Keep Alive: Not Supported 00:19:07.648 00:19:07.648 NVM Command Set Attributes 00:19:07.648 ========================== 00:19:07.648 Submission Queue Entry Size 00:19:07.648 Max: 1 00:19:07.648 Min: 1 00:19:07.648 Completion Queue Entry Size 00:19:07.648 Max: 1 00:19:07.648 Min: 1 00:19:07.648 Number of Namespaces: 0 00:19:07.648 Compare Command: Not Supported 00:19:07.648 Write Uncorrectable Command: Not Supported 00:19:07.648 Dataset Management Command: Not Supported 00:19:07.648 Write Zeroes Command: Not Supported 00:19:07.648 Set Features Save Field: Not Supported 00:19:07.648 Reservations: Not Supported 00:19:07.648 Timestamp: Not Supported 00:19:07.648 Copy: Not Supported 00:19:07.648 Volatile Write Cache: Not Present 00:19:07.648 Atomic Write Unit (Normal): 1 00:19:07.648 Atomic Write Unit (PFail): 1 00:19:07.648 Atomic Compare & Write Unit: 1 00:19:07.648 Fused Compare & Write: Not Supported 00:19:07.648 Scatter-Gather List 00:19:07.648 SGL Command Set: Supported 00:19:07.648 SGL Keyed: Not Supported 00:19:07.648 SGL Bit Bucket Descriptor: Not Supported 00:19:07.648 SGL Metadata Pointer: Not Supported 00:19:07.648 Oversized SGL: Not Supported 00:19:07.648 SGL Metadata Address: Not Supported 00:19:07.648 SGL Offset: Supported 00:19:07.648 Transport SGL Data Block: Not Supported 00:19:07.648 Replay Protected Memory Block: Not Supported 00:19:07.648 00:19:07.648 Firmware Slot Information 00:19:07.648 ========================= 00:19:07.648 Active slot: 0 00:19:07.648 00:19:07.648 00:19:07.648 Error Log 00:19:07.648 ========= 00:19:07.648 00:19:07.648 Active Namespaces 00:19:07.648 ================= 00:19:07.648 Discovery Log Page 00:19:07.648 ================== 00:19:07.648 Generation Counter: 2 00:19:07.648 Number of Records: 2 00:19:07.649 Record Format: 0 00:19:07.649 00:19:07.649 Discovery Log Entry 0 00:19:07.649 ---------------------- 00:19:07.649 Transport Type: 3 (TCP) 00:19:07.649 Address Family: 1 (IPv4) 00:19:07.649 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:07.649 Entry Flags: 00:19:07.649 Duplicate Returned Information: 0 00:19:07.649 Explicit Persistent Connection Support for Discovery: 0 00:19:07.649 Transport Requirements: 00:19:07.649 Secure Channel: Not Specified 00:19:07.649 Port ID: 1 (0x0001) 00:19:07.649 Controller ID: 65535 (0xffff) 00:19:07.649 Admin Max SQ Size: 32 00:19:07.649 Transport Service Identifier: 4420 00:19:07.649 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:07.649 Transport Address: 10.0.0.1 00:19:07.649 Discovery Log Entry 1 00:19:07.649 ---------------------- 
00:19:07.649 Transport Type: 3 (TCP) 00:19:07.649 Address Family: 1 (IPv4) 00:19:07.649 Subsystem Type: 2 (NVM Subsystem) 00:19:07.649 Entry Flags: 00:19:07.649 Duplicate Returned Information: 0 00:19:07.649 Explicit Persistent Connection Support for Discovery: 0 00:19:07.649 Transport Requirements: 00:19:07.649 Secure Channel: Not Specified 00:19:07.649 Port ID: 1 (0x0001) 00:19:07.649 Controller ID: 65535 (0xffff) 00:19:07.649 Admin Max SQ Size: 32 00:19:07.649 Transport Service Identifier: 4420 00:19:07.649 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:07.649 Transport Address: 10.0.0.1 00:19:07.649 08:59:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:07.908 get_feature(0x01) failed 00:19:07.908 get_feature(0x02) failed 00:19:07.908 get_feature(0x04) failed 00:19:07.908 ===================================================== 00:19:07.908 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:07.908 ===================================================== 00:19:07.908 Controller Capabilities/Features 00:19:07.908 ================================ 00:19:07.908 Vendor ID: 0000 00:19:07.908 Subsystem Vendor ID: 0000 00:19:07.908 Serial Number: a4dfdc3b364dcb8951da 00:19:07.908 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:07.908 Firmware Version: 6.7.0-68 00:19:07.908 Recommended Arb Burst: 6 00:19:07.908 IEEE OUI Identifier: 00 00 00 00:19:07.908 Multi-path I/O 00:19:07.908 May have multiple subsystem ports: Yes 00:19:07.908 May have multiple controllers: Yes 00:19:07.908 Associated with SR-IOV VF: No 00:19:07.908 Max Data Transfer Size: Unlimited 00:19:07.908 Max Number of Namespaces: 1024 00:19:07.908 Max Number of I/O Queues: 128 00:19:07.908 NVMe Specification Version (VS): 1.3 00:19:07.908 NVMe Specification Version (Identify): 1.3 00:19:07.908 Maximum Queue Entries: 1024 00:19:07.908 Contiguous Queues Required: No 00:19:07.908 Arbitration Mechanisms Supported 00:19:07.908 Weighted Round Robin: Not Supported 00:19:07.908 Vendor Specific: Not Supported 00:19:07.908 Reset Timeout: 7500 ms 00:19:07.908 Doorbell Stride: 4 bytes 00:19:07.908 NVM Subsystem Reset: Not Supported 00:19:07.908 Command Sets Supported 00:19:07.908 NVM Command Set: Supported 00:19:07.908 Boot Partition: Not Supported 00:19:07.908 Memory Page Size Minimum: 4096 bytes 00:19:07.908 Memory Page Size Maximum: 4096 bytes 00:19:07.908 Persistent Memory Region: Not Supported 00:19:07.908 Optional Asynchronous Events Supported 00:19:07.908 Namespace Attribute Notices: Supported 00:19:07.908 Firmware Activation Notices: Not Supported 00:19:07.908 ANA Change Notices: Supported 00:19:07.908 PLE Aggregate Log Change Notices: Not Supported 00:19:07.908 LBA Status Info Alert Notices: Not Supported 00:19:07.908 EGE Aggregate Log Change Notices: Not Supported 00:19:07.908 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.908 Zone Descriptor Change Notices: Not Supported 00:19:07.908 Discovery Log Change Notices: Not Supported 00:19:07.908 Controller Attributes 00:19:07.908 128-bit Host Identifier: Supported 00:19:07.908 Non-Operational Permissive Mode: Not Supported 00:19:07.908 NVM Sets: Not Supported 00:19:07.908 Read Recovery Levels: Not Supported 00:19:07.908 Endurance Groups: Not Supported 00:19:07.908 Predictable Latency Mode: Not Supported 00:19:07.908 Traffic Based Keep ALive: 
Supported 00:19:07.908 Namespace Granularity: Not Supported 00:19:07.908 SQ Associations: Not Supported 00:19:07.908 UUID List: Not Supported 00:19:07.908 Multi-Domain Subsystem: Not Supported 00:19:07.908 Fixed Capacity Management: Not Supported 00:19:07.908 Variable Capacity Management: Not Supported 00:19:07.908 Delete Endurance Group: Not Supported 00:19:07.908 Delete NVM Set: Not Supported 00:19:07.908 Extended LBA Formats Supported: Not Supported 00:19:07.908 Flexible Data Placement Supported: Not Supported 00:19:07.908 00:19:07.908 Controller Memory Buffer Support 00:19:07.908 ================================ 00:19:07.908 Supported: No 00:19:07.908 00:19:07.908 Persistent Memory Region Support 00:19:07.908 ================================ 00:19:07.908 Supported: No 00:19:07.908 00:19:07.908 Admin Command Set Attributes 00:19:07.908 ============================ 00:19:07.908 Security Send/Receive: Not Supported 00:19:07.908 Format NVM: Not Supported 00:19:07.908 Firmware Activate/Download: Not Supported 00:19:07.908 Namespace Management: Not Supported 00:19:07.908 Device Self-Test: Not Supported 00:19:07.908 Directives: Not Supported 00:19:07.908 NVMe-MI: Not Supported 00:19:07.908 Virtualization Management: Not Supported 00:19:07.908 Doorbell Buffer Config: Not Supported 00:19:07.908 Get LBA Status Capability: Not Supported 00:19:07.908 Command & Feature Lockdown Capability: Not Supported 00:19:07.908 Abort Command Limit: 4 00:19:07.908 Async Event Request Limit: 4 00:19:07.908 Number of Firmware Slots: N/A 00:19:07.908 Firmware Slot 1 Read-Only: N/A 00:19:07.908 Firmware Activation Without Reset: N/A 00:19:07.908 Multiple Update Detection Support: N/A 00:19:07.908 Firmware Update Granularity: No Information Provided 00:19:07.908 Per-Namespace SMART Log: Yes 00:19:07.908 Asymmetric Namespace Access Log Page: Supported 00:19:07.908 ANA Transition Time : 10 sec 00:19:07.908 00:19:07.908 Asymmetric Namespace Access Capabilities 00:19:07.908 ANA Optimized State : Supported 00:19:07.908 ANA Non-Optimized State : Supported 00:19:07.908 ANA Inaccessible State : Supported 00:19:07.908 ANA Persistent Loss State : Supported 00:19:07.908 ANA Change State : Supported 00:19:07.908 ANAGRPID is not changed : No 00:19:07.908 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:07.908 00:19:07.908 ANA Group Identifier Maximum : 128 00:19:07.908 Number of ANA Group Identifiers : 128 00:19:07.908 Max Number of Allowed Namespaces : 1024 00:19:07.908 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:07.908 Command Effects Log Page: Supported 00:19:07.908 Get Log Page Extended Data: Supported 00:19:07.908 Telemetry Log Pages: Not Supported 00:19:07.908 Persistent Event Log Pages: Not Supported 00:19:07.908 Supported Log Pages Log Page: May Support 00:19:07.908 Commands Supported & Effects Log Page: Not Supported 00:19:07.908 Feature Identifiers & Effects Log Page:May Support 00:19:07.908 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.908 Data Area 4 for Telemetry Log: Not Supported 00:19:07.908 Error Log Page Entries Supported: 128 00:19:07.908 Keep Alive: Supported 00:19:07.908 Keep Alive Granularity: 1000 ms 00:19:07.908 00:19:07.908 NVM Command Set Attributes 00:19:07.908 ========================== 00:19:07.908 Submission Queue Entry Size 00:19:07.908 Max: 64 00:19:07.908 Min: 64 00:19:07.908 Completion Queue Entry Size 00:19:07.908 Max: 16 00:19:07.908 Min: 16 00:19:07.908 Number of Namespaces: 1024 00:19:07.908 Compare Command: Not Supported 00:19:07.908 Write Uncorrectable Command: Not 
Supported 00:19:07.908 Dataset Management Command: Supported 00:19:07.908 Write Zeroes Command: Supported 00:19:07.908 Set Features Save Field: Not Supported 00:19:07.908 Reservations: Not Supported 00:19:07.908 Timestamp: Not Supported 00:19:07.908 Copy: Not Supported 00:19:07.908 Volatile Write Cache: Present 00:19:07.908 Atomic Write Unit (Normal): 1 00:19:07.908 Atomic Write Unit (PFail): 1 00:19:07.908 Atomic Compare & Write Unit: 1 00:19:07.908 Fused Compare & Write: Not Supported 00:19:07.908 Scatter-Gather List 00:19:07.908 SGL Command Set: Supported 00:19:07.908 SGL Keyed: Not Supported 00:19:07.908 SGL Bit Bucket Descriptor: Not Supported 00:19:07.908 SGL Metadata Pointer: Not Supported 00:19:07.908 Oversized SGL: Not Supported 00:19:07.908 SGL Metadata Address: Not Supported 00:19:07.908 SGL Offset: Supported 00:19:07.908 Transport SGL Data Block: Not Supported 00:19:07.908 Replay Protected Memory Block: Not Supported 00:19:07.908 00:19:07.908 Firmware Slot Information 00:19:07.908 ========================= 00:19:07.908 Active slot: 0 00:19:07.908 00:19:07.908 Asymmetric Namespace Access 00:19:07.908 =========================== 00:19:07.908 Change Count : 0 00:19:07.908 Number of ANA Group Descriptors : 1 00:19:07.908 ANA Group Descriptor : 0 00:19:07.908 ANA Group ID : 1 00:19:07.908 Number of NSID Values : 1 00:19:07.908 Change Count : 0 00:19:07.908 ANA State : 1 00:19:07.908 Namespace Identifier : 1 00:19:07.908 00:19:07.908 Commands Supported and Effects 00:19:07.908 ============================== 00:19:07.908 Admin Commands 00:19:07.908 -------------- 00:19:07.908 Get Log Page (02h): Supported 00:19:07.908 Identify (06h): Supported 00:19:07.908 Abort (08h): Supported 00:19:07.908 Set Features (09h): Supported 00:19:07.908 Get Features (0Ah): Supported 00:19:07.908 Asynchronous Event Request (0Ch): Supported 00:19:07.908 Keep Alive (18h): Supported 00:19:07.908 I/O Commands 00:19:07.908 ------------ 00:19:07.908 Flush (00h): Supported 00:19:07.908 Write (01h): Supported LBA-Change 00:19:07.908 Read (02h): Supported 00:19:07.908 Write Zeroes (08h): Supported LBA-Change 00:19:07.908 Dataset Management (09h): Supported 00:19:07.908 00:19:07.908 Error Log 00:19:07.908 ========= 00:19:07.908 Entry: 0 00:19:07.908 Error Count: 0x3 00:19:07.908 Submission Queue Id: 0x0 00:19:07.908 Command Id: 0x5 00:19:07.908 Phase Bit: 0 00:19:07.908 Status Code: 0x2 00:19:07.908 Status Code Type: 0x0 00:19:07.908 Do Not Retry: 1 00:19:07.908 Error Location: 0x28 00:19:07.908 LBA: 0x0 00:19:07.908 Namespace: 0x0 00:19:07.908 Vendor Log Page: 0x0 00:19:07.908 ----------- 00:19:07.908 Entry: 1 00:19:07.908 Error Count: 0x2 00:19:07.908 Submission Queue Id: 0x0 00:19:07.908 Command Id: 0x5 00:19:07.908 Phase Bit: 0 00:19:07.908 Status Code: 0x2 00:19:07.908 Status Code Type: 0x0 00:19:07.908 Do Not Retry: 1 00:19:07.908 Error Location: 0x28 00:19:07.908 LBA: 0x0 00:19:07.908 Namespace: 0x0 00:19:07.908 Vendor Log Page: 0x0 00:19:07.908 ----------- 00:19:07.908 Entry: 2 00:19:07.908 Error Count: 0x1 00:19:07.908 Submission Queue Id: 0x0 00:19:07.908 Command Id: 0x4 00:19:07.908 Phase Bit: 0 00:19:07.908 Status Code: 0x2 00:19:07.908 Status Code Type: 0x0 00:19:07.908 Do Not Retry: 1 00:19:07.908 Error Location: 0x28 00:19:07.908 LBA: 0x0 00:19:07.908 Namespace: 0x0 00:19:07.908 Vendor Log Page: 0x0 00:19:07.908 00:19:07.908 Number of Queues 00:19:07.908 ================ 00:19:07.908 Number of I/O Submission Queues: 128 00:19:07.908 Number of I/O Completion Queues: 128 00:19:07.908 00:19:07.908 ZNS 
Specific Controller Data 00:19:07.908 ============================ 00:19:07.908 Zone Append Size Limit: 0 00:19:07.908 00:19:07.908 00:19:07.908 Active Namespaces 00:19:07.908 ================= 00:19:07.908 get_feature(0x05) failed 00:19:07.908 Namespace ID:1 00:19:07.908 Command Set Identifier: NVM (00h) 00:19:07.908 Deallocate: Supported 00:19:07.908 Deallocated/Unwritten Error: Not Supported 00:19:07.908 Deallocated Read Value: Unknown 00:19:07.908 Deallocate in Write Zeroes: Not Supported 00:19:07.908 Deallocated Guard Field: 0xFFFF 00:19:07.908 Flush: Supported 00:19:07.908 Reservation: Not Supported 00:19:07.908 Namespace Sharing Capabilities: Multiple Controllers 00:19:07.908 Size (in LBAs): 1310720 (5GiB) 00:19:07.909 Capacity (in LBAs): 1310720 (5GiB) 00:19:07.909 Utilization (in LBAs): 1310720 (5GiB) 00:19:07.909 UUID: 23cd6889-24df-475c-b460-43afead5fbbb 00:19:07.909 Thin Provisioning: Not Supported 00:19:07.909 Per-NS Atomic Units: Yes 00:19:07.909 Atomic Boundary Size (Normal): 0 00:19:07.909 Atomic Boundary Size (PFail): 0 00:19:07.909 Atomic Boundary Offset: 0 00:19:07.909 NGUID/EUI64 Never Reused: No 00:19:07.909 ANA group ID: 1 00:19:07.909 Namespace Write Protected: No 00:19:07.909 Number of LBA Formats: 1 00:19:07.909 Current LBA Format: LBA Format #00 00:19:07.909 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:07.909 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.909 rmmod nvme_tcp 00:19:07.909 rmmod nvme_fabrics 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.909 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:08.177 08:59:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:08.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:08.749 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.009 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.009 ************************************ 00:19:09.009 END TEST nvmf_identify_kernel_target 00:19:09.009 ************************************ 00:19:09.009 00:19:09.009 real 0m2.837s 00:19:09.009 user 0m0.988s 00:19:09.009 sys 0m1.304s 00:19:09.009 08:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:09.009 08:59:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.009 08:59:25 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:09.009 08:59:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:09.009 08:59:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:09.009 08:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:09.009 ************************************ 00:19:09.009 START TEST nvmf_auth 00:19:09.009 ************************************ 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:09.009 * Looking for test storage... 
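For reference before the next test begins: the configure_kernel_target/clean_kernel_target steps traced above reduce to the condensed sketch below. xtrace does not show where each echo is redirected, so the nvmet configfs attribute names used here (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel ones and should be read as a reconstruction of the helpers, not their literal code; the NQN, device and address values are taken from the trace.

# Reconstruction of the kernel NVMe-oF/TCP target setup via configfs
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

mkdir "$cfg/subsystems/$nqn"                                   # subsystem
mkdir "$cfg/subsystems/$nqn/namespaces/1"                      # namespace 1
mkdir "$cfg/ports/1"                                           # listener port
echo "SPDK-$nqn" > "$cfg/subsystems/$nqn/attr_model"           # model string seen later in Identify
echo 1 > "$cfg/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"   # local block device backing ns 1
echo 1 > "$cfg/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"                     # NVMe/TCP listener at 10.0.0.1:4420
echo tcp > "$cfg/ports/1/addr_trtype"
echo 4420 > "$cfg/ports/1/addr_trsvcid"
echo ipv4 > "$cfg/ports/1/addr_adrfam"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"        # export the subsystem on the port

# Teardown (clean_kernel_target), as traced right above
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
rm -f "$cfg/ports/1/subsystems/$nqn"
rmdir "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet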
00:19:09.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.009 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:09.269 Cannot find device "nvmf_tgt_br" 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@155 -- # true 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.269 Cannot find device "nvmf_tgt_br2" 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@156 -- # true 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:09.269 Cannot find device "nvmf_tgt_br" 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@158 -- # true 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:09.269 Cannot find device "nvmf_tgt_br2" 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@159 -- # true 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@162 -- # true 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@163 -- # true 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.269 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.270 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:09.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:09.531 00:19:09.531 --- 10.0.0.2 ping statistics --- 00:19:09.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.531 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:09.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:09.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:09.531 00:19:09.531 --- 10.0.0.3 ping statistics --- 00:19:09.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.531 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:19:09.531 00:19:09.531 --- 10.0.0.1 ping statistics --- 00:19:09.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.531 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@433 -- # return 0 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=85242 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 85242 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 85242 ']' 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
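The nvmf_veth_init sequence traced above (network namespace, three veth pairs, a bridge joining the host-side ends, and an iptables rule admitting TCP/4420) is easier to follow as a condensed script. All interface names and addresses below are taken from the trace; this is a sketch of what the helper does, not a copy of it.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target ends move into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # sanity-check connectivity before starting nvmf_tgt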
00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:09.531 08:59:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:10.466 08:59:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:10.466 08:59:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:19:10.466 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.466 08:59:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.466 08:59:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:10.725 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.725 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:10.725 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:19:10.725 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=cc075da4883aa0ec7c0fb1733979286c 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.Vti 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key cc075da4883aa0ec7c0fb1733979286c 0 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 cc075da4883aa0ec7c0fb1733979286c 0 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=cc075da4883aa0ec7c0fb1733979286c 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.Vti 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.Vti 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.Vti 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=52ac942fc9e233963aa32df12e996a6385cb2c60c2d7be07ed8b935efadc9316 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.mgD 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 52ac942fc9e233963aa32df12e996a6385cb2c60c2d7be07ed8b935efadc9316 3 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 52ac942fc9e233963aa32df12e996a6385cb2c60c2d7be07ed8b935efadc9316 3 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=52ac942fc9e233963aa32df12e996a6385cb2c60c2d7be07ed8b935efadc9316 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.mgD 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.mgD 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.mgD 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=42c785a1d1dc6484b3e92016567ab39a0dc15ee64aad19fe 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.Yvu 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 42c785a1d1dc6484b3e92016567ab39a0dc15ee64aad19fe 0 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 42c785a1d1dc6484b3e92016567ab39a0dc15ee64aad19fe 0 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=42c785a1d1dc6484b3e92016567ab39a0dc15ee64aad19fe 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.Yvu 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.Yvu 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.Yvu 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=dde157f0339b5be2668ce5f8c49242a635f9103ac85a3169 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.zDa 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key dde157f0339b5be2668ce5f8c49242a635f9103ac85a3169 2 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 dde157f0339b5be2668ce5f8c49242a635f9103ac85a3169 2 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=dde157f0339b5be2668ce5f8c49242a635f9103ac85a3169 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:19:10.726 08:59:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:10.985 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.zDa 00:19:10.985 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.zDa 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.zDa 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=2ab37f95e9c9cc7d077ff6e507625fca 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.Utc 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 2ab37f95e9c9cc7d077ff6e507625fca 1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 2ab37f95e9c9cc7d077ff6e507625fca 1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=2ab37f95e9c9cc7d077ff6e507625fca 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.Utc 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.Utc 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.Utc 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=d41dcb5b2625329b8e76b1d393d2490f 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.OH5 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d41dcb5b2625329b8e76b1d393d2490f 1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d41dcb5b2625329b8e76b1d393d2490f 1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d41dcb5b2625329b8e76b1d393d2490f 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.OH5 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.OH5 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.OH5 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=a6db9c0535836f67e05b78ec7824a917ceb57e2579cbaf01 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.S4N 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key a6db9c0535836f67e05b78ec7824a917ceb57e2579cbaf01 2 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 a6db9c0535836f67e05b78ec7824a917ceb57e2579cbaf01 2 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=a6db9c0535836f67e05b78ec7824a917ceb57e2579cbaf01 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:19:10.986 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.S4N 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.S4N 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.S4N 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=72c8d53fc1a1a95e21d084ce5bc1e881 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.9Ll 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 72c8d53fc1a1a95e21d084ce5bc1e881 0 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 72c8d53fc1a1a95e21d084ce5bc1e881 0 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=72c8d53fc1a1a95e21d084ce5bc1e881 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.9Ll 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.9Ll 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.9Ll 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e9aebc423fc7712a5be6efb4753e168c47efabd929191f2c040dc06e2b063e21 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.8VG 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e9aebc423fc7712a5be6efb4753e168c47efabd929191f2c040dc06e2b063e21 3 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e9aebc423fc7712a5be6efb4753e168c47efabd929191f2c040dc06e2b063e21 3 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e9aebc423fc7712a5be6efb4753e168c47efabd929191f2c040dc06e2b063e21 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.8VG 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.8VG 00:19:11.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.8VG 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 85242 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 85242 ']' 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:11.245 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.512 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:11.512 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:19:11.512 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Vti 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.mgD ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mgD 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Yvu 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.zDa ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zDa 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 
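The gen_key calls traced above draw len/2 random bytes with xxd from /dev/urandom and hand the hex string to an inline python snippet (format_dhchap_key -> format_key) that emits the DHHC-1 secret written to /tmp/spdk.key-* before it is registered with keyring_file_add_key. The body of that python snippet is not visible in xtrace, so the sketch below reconstructs it under the assumption that a DHHC-1 secret is "DHHC-1:<hash-id>:" + base64(key bytes followed by a little-endian CRC32) + ":", the NVMe-oF DH-HMAC-CHAP secret representation; it is a minimal stand-in, not a verbatim copy of nvmf/common.sh.

  # Sketch of "gen_key sha384 48" as seen in the trace (hash id 02 = SHA-384).
  key=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes -> 48 hex characters
  file=$(mktemp -t spdk.key-sha384.XXX)
  # Assumption: key || CRC32(key) (little-endian), base64-encoded, wrapped in the DHHC-1 prefix/suffix.
  python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(raw).to_bytes(4,"little"); print("DHHC-1:02:"+base64.b64encode(raw+crc).decode()+":")' "$key" > "$file"
  chmod 0600 "$file"                       # secrets are made owner-only, as in the trace
  echo "$file"                             # path recorded into keys[]/ckeys[]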
00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Utc 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.OH5 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OH5 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.S4N 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.9Ll ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9Ll 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8VG 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.513 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:11.771 08:59:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:12.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:12.029 Waiting for block devices as requested 00:19:12.029 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.029 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:12.595 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:12.596 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:12.596 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:12.855 No valid GPT data, bailing 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 
-- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:12.855 No valid GPT data, bailing 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:12.855 No valid GPT data, bailing 00:19:12.855 08:59:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:12.855 No valid GPT data, bailing 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 
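The block scan above walks /sys/block/nvme*, skips zoned namespaces, and probes each candidate with scripts/spdk-gpt.py and blkid; "No valid GPT data, bailing" followed by an empty PTTYPE and "return 1" from block_in_use is the expected outcome for a free disk, and the last device to pass (here /dev/nvme1n1) becomes the backing namespace for the kernel target. A condensed sketch of that selection loop follows; the spdk-gpt.py probe is left out and an empty blkid PTTYPE is taken as "unused", which is a simplification of the traced logic.

  # Pick an idle, non-zoned NVMe namespace to back the kernel nvmet target.
  nvme=""
  for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=/dev/${block##*/}
    zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned != none ]] && continue                         # skip ZNS namespaces
    if [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]]; then
      nvme=$dev                                              # no partition table -> treat as free
    fi                                                       # no break: the last free device wins, as in the trace
  done
  echo "backing device: ${nvme:-none}"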
00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:12.855 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -a 10.0.0.1 -t tcp -s 4420 00:19:13.114 00:19:13.114 Discovery Log Number of Records 2, Generation counter 2 00:19:13.114 =====Discovery Log Entry 0====== 00:19:13.114 trtype: tcp 00:19:13.114 adrfam: ipv4 00:19:13.114 subtype: current discovery subsystem 00:19:13.114 treq: not specified, sq flow control disable supported 00:19:13.114 portid: 1 00:19:13.114 trsvcid: 4420 00:19:13.114 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:13.114 traddr: 10.0.0.1 00:19:13.114 eflags: none 00:19:13.114 sectype: none 00:19:13.114 =====Discovery Log Entry 1====== 00:19:13.114 trtype: tcp 00:19:13.114 adrfam: ipv4 00:19:13.114 subtype: nvme subsystem 00:19:13.114 treq: not specified, sq flow control disable supported 00:19:13.114 portid: 1 00:19:13.114 trsvcid: 4420 00:19:13.114 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:13.114 traddr: 10.0.0.1 00:19:13.114 eflags: none 00:19:13.114 sectype: none 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:13.114 08:59:29 
nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.114 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.373 nvme0n1 
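The stretch above covers the target bring-up: configure_kernel_target builds the nqn.2024-02.io.spdk:cnode0 subsystem, namespace 1 and TCP port 1 under /sys/kernel/config/nvmet and verifies them with nvme discover, then nvmet_auth_init (host/auth.sh@36..38) creates the host entry and allow-lists it. xtrace does not record redirection targets, so the sketch below fills in where each echo most plausibly lands using the standard nvmet configfs attribute names; the attribute for each write is an assumption, only the values and their order come from the trace, and the serial/model write ("SPDK-nqn...") is omitted because its target is the least certain.

  # Assumed configfs targets for the echo calls in nvmf/common.sh@658..677 and host/auth.sh@36..38.
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

  mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"     # backing disk chosen above
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                        # publish the subsystem on the TCP port
  echo 0 > "$subsys/attr_allow_any_host"                     # host/auth.sh@37: only allowed_hosts may connect
  ln -s "$host" "$subsys/allowed_hosts/"                     # host/auth.sh@38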
00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.373 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 nvme0n1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe2048 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 nvme0n1 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.632 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.633 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.633 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:13.633 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 
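The rest of the section repeats this connect_authenticate pattern for every digest/dhgroup/key-id combination: the secrets were registered once up front with keyring_file_add_key (key0..key4 and ckey0..ckey3, as shown earlier), and each iteration then pins the initiator to a single digest and DH group, attaches with the matching key pair, checks that the nvme0 controller appears, and detaches. Collapsed into one pass over JSON-RPC, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (that wiring is an assumption about the test harness, not shown in the trace):

  # One connect_authenticate round (sha256 / ffdhe2048 / key id 1), as a hedged sketch.
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect "nvme0" when authentication succeeds
  $rpc bdev_nvme_detach_controller nvme0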
00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:13.892 08:59:29 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.892 nvme0n1 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.892 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 nvme0n1 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.175 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.447 nvme0n1 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.447 08:59:30 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.447 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.706 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.965 nvme0n1 00:19:14.965 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.965 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.966 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.966 08:59:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.966 08:59:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.966 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.225 nvme0n1 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:15.225 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 nvme0n1 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.226 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.485 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 nvme0n1 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:19:15.486 
08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.486 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.745 nvme0n1 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha256 ffdhe4096 0 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.745 08:59:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.682 08:59:32 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:16.682 nvme0n1 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
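The trace cycles the same connect_authenticate pass over every key id for each DH group in turn (ffdhe3072 above, ffdhe4096 from this point onward): the target-side key is installed with nvmet_auth_set_key, the initiator is pinned to the digest/dhgroup under test with bdev_nvme_set_options, the controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key pair, its presence is checked through bdev_nvme_get_controllers, and it is detached again before the next key. A minimal bash sketch of one such sweep, reconstructed only from the RPCs visible in this trace (the loop shape and the digest/dhgroup variables are an illustration, not the actual host/auth.sh source):

  # Sketch of one digest/dhgroup sweep as it appears in the trace above.
  # rpc_cmd and nvmet_auth_set_key are the SPDK test helpers invoked in the log;
  # the address, NQNs and key names are copied from the trace.
  digest=sha256
  dhgroup=ffdhe4096
  for keyid in 0 1 2 3 4; do
      # Target side: program the DH-HMAC-CHAP key (and controller key) for this host.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: restrict the initiator to the digest/dhgroup under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Key id 4 has no controller key in the trace, so bidirectional auth is skipped there.
      ckey_arg=()
      [[ $keyid -lt 4 ]] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")

      # Connect with the matching key pair; the attach only succeeds if the handshake does.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey_arg[@]}"

      # Verify the controller came up, then tear it down before the next key.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done
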
00:19:16.682 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.683 08:59:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:16.941 nvme0n1 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.941 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.198 nvme0n1 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:17.198 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.456 nvme0n1 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.456 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.714 08:59:33 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.715 nvme0n1 00:19:17.715 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.980 08:59:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:19:17.980 08:59:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.911 08:59:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:19.911 nvme0n1 00:19:19.911 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.911 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.911 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.911 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:19.911 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:19.911 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.168 
08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:20.168 
08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.168 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.425 nvme0n1 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:20.425 08:59:36 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.425 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.991 nvme0n1 00:19:20.991 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.991 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.991 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.991 08:59:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:20.991 08:59:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:20.991 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.992 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:20.992 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:20.992 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:20.992 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:20.992 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.992 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.250 nvme0n1 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # jq -r '.[].name' 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.250 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.508 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.766 nvme0n1 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.766 08:59:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:25.970 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.971 08:59:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:26.537 nvme0n1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.537 08:59:42 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.466 nvme0n1 00:19:27.466 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.466 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.466 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.467 08:59:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 nvme0n1 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.032 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.033 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 nvme0n1 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 08:59:44 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:28.968 08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.968 
08:59:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.535 nvme0n1 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.535 nvme0n1 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:29.535 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- 
host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.795 nvme0n1 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 
08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:29.795 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.796 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.796 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:29.796 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.796 08:59:45 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:29.796 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:29.796 08:59:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:29.796 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.796 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.796 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.054 nvme0n1 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.054 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 3 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth 
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.055 nvme0n1 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.055 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:30.313 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.314 nvme0n1 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.314 08:59:46 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.314 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.573 nvme0n1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.573 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.833 nvme0n1 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:30.833 08:59:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.834 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.834 08:59:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.834 nvme0n1 00:19:30.834 08:59:47 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.834 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.834 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:30.834 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.834 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:30.834 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.092 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.093 nvme0n1 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:19:31.093 
08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.093 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.352 nvme0n1 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha384 ffdhe4096 0 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:31.352 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.353 08:59:47 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.353 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.611 nvme0n1 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.611 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
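The trace above repeats one cycle per key: restrict the initiator to the DH-HMAC-CHAP digest and DH group under test, attach the controller with that key pair, confirm the controller shows up, then detach before moving to the next key. A minimal sketch of one such cycle is shown below, using the same rpc_cmd wrapper the test uses (a thin wrapper around the SPDK RPC client); it assumes the target side already holds the matching secrets (the nvmet_auth_set_key step visible in the trace) and that key1/ckey1 are key names registered earlier in the test, which is not shown in this excerpt.

# Sketch of one connect_authenticate iteration (sha384 / ffdhe4096 / keyid 1),
# mirroring the commands recorded in the trace above. Assumptions: the target
# already has the corresponding DH-HMAC-CHAP secrets configured, and key1/ckey1
# were registered with the initiator earlier in the test script.

# Limit the initiator to the digest/dhgroup pair being exercised.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Attach with the host key and the bidirectional (controller) key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Authentication succeeded if the controller is listed; detach before the next key.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0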
00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.612 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.870 nvme0n1 00:19:31.870 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.870 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.870 08:59:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:31.870 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.870 08:59:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:31.870 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.871 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.130 nvme0n1 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.130 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.400 nvme0n1 00:19:32.400 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.400 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.400 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.400 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.400 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:32.400 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.401 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.674 nvme0n1 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha384)' 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:32.674 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.675 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:32.675 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:32.675 08:59:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:32.675 08:59:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.675 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.675 08:59:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 nvme0n1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.250 
08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:33.250 
08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.250 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.509 nvme0n1 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:33.509 08:59:49 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:33.509 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.510 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:33.510 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:33.510 08:59:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:33.510 08:59:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.510 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.510 08:59:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.076 nvme0n1 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:34.076 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.077 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.335 nvme0n1 00:19:34.335 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.335 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:34.335 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:34.335 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.335 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.594 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- 
# [[ -z tcp ]] 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.595 08:59:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.853 nvme0n1 00:19:34.853 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.853 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.853 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # 
local digest dhgroup keyid ckey 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:34.854 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.112 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:35.679 nvme0n1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
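[editor's note] The trace above is one complete connect_authenticate pass (digest sha384, DH group ffdhe8192, key index 0). As a rough sketch only, the same initiator-side sequence issued directly would look like the lines below, assuming rpc_cmd resolves to SPDK's scripts/rpc.py as in the standard autotest harness; key0/ckey0 are key names assumed to have been registered earlier in the run, outside this excerpt, and the address, port, and NQNs are taken from the log.
# hedged sketch, not part of the captured log
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192     # restrict negotiation to the combo under test
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0               # host and (optional) controller DH-HMAC-CHAP keys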
00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.679 08:59:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # 
set +x 00:19:36.246 nvme0n1 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:36.246 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:36.504 08:59:52 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.504 08:59:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.071 nvme0n1 00:19:37.071 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.071 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.071 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:37.071 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.071 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.071 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:37.072 
08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.072 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.640 nvme0n1 00:19:37.640 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.640 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.640 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.640 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.640 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:37.640 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.899 
08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.899 08:59:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
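[editor's note] In the pass above, key index 4 has no controller key (the trace shows ckey is empty), so the attach is issued with --dhchap-key only. After every attach the test confirms the controller actually came up and then tears it down; a minimal sketch of that check, under the same rpc.py assumption as the earlier note, is:
# hedged sketch, not part of the captured log
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]                                     # assert the authenticated controller exists
./scripts/rpc.py bdev_nvme_detach_controller nvme0           # tear down before the next digest/dhgroup/key combo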
00:19:38.468 nvme0n1 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth 
-- common/autotest_common.sh@10 -- # set +x 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.468 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:38.469 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:38.469 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:38.469 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.469 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.469 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.728 nvme0n1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe2048 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.728 nvme0n1 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.728 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.729 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 
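[editor's note] This whole section is the test's triple loop over digests, DH groups, and key indexes (host/auth.sh lines 113-117 in the trace); the DHHC-1:NN:<base64>: strings it passes around are pre-formatted DH-HMAC-CHAP secrets. A schematic of the driver logic, using the helper names visible in the trace and assuming digests/dhgroups/keys/ckeys are arrays populated earlier in auth.sh:
# hedged sketch of the loop structure, not part of the captured log
for digest in "${digests[@]}"; do              # e.g. sha384, sha512
  for dhgroup in "${dhgroups[@]}"; do          # e.g. ffdhe2048 .. ffdhe8192
    for keyid in "${!keys[@]}"; do             # key indexes 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the initiator side
    done
  done
done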
00:19:38.729 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.729 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.729 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:38.988 08:59:54 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.988 08:59:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 nvme0n1 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 3 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:38.988 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.989 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.248 nvme0n1 00:19:39.248 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.302 nvme0n1 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:39.302 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.303 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.303 08:59:55 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.303 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.303 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.303 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 nvme0n1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.562 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 nvme0n1 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:39.821 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.822 08:59:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:39.822 nvme0n1 00:19:39.822 08:59:56 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.822 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.822 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:39.822 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.822 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.081 nvme0n1 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:40.081 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:19:40.081 
08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.340 nvme0n1 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:40.340 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.341 08:59:56 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.341 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.599 nvme0n1 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:40.599 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
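Every iteration recorded in this trace follows the same connect_authenticate pattern: restrict the initiator's DH-HCHAP digest/dhgroup, attach with a host key (and optionally a controller key), confirm the controller came up, then detach. The following is a minimal bash sketch of one such pass, assuming `./scripts/rpc.py` stands in for the script's `rpc_cmd` wrapper and that the `keyN`/`ckeyN` key names were registered with the target earlier in the run, as the trace implies; it is an illustration of the sequence, not the test script itself.

```bash
# Sketch of a single connect_authenticate pass as seen in the trace above.
# Assumptions: ./scripts/rpc.py talks to the running SPDK target, and
# "key$keyid"/"ckey$keyid" already exist from the setup phase of the test.
digest=sha512
dhgroup=ffdhe4096
keyid=1

rpc() { ./scripts/rpc.py "$@"; }   # stand-in for the script's rpc_cmd wrapper

# Limit the initiator to the digest and DH group under test.
rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with a host key and a bidirectional controller key; the connect
# only succeeds if DH-HCHAP authentication succeeds.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller exists, then clean up for the next iteration.
[[ "$(rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc bdev_nvme_detach_controller nvme0
```

The trace below continues this loop for the remaining key IDs and the ffdhe4096, ffdhe6144, and ffdhe8192 groups.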
00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.600 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.858 nvme0n1 00:19:40.858 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.858 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.858 08:59:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:40.858 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.858 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.858 08:59:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:40.858 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.859 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:40.859 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:40.859 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:40.859 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.859 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.859 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 nvme0n1 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.118 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.377 nvme0n1 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.377 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.636 nvme0n1 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:19:41.636 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.637 08:59:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.207 nvme0n1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.207 
08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:42.207 
08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.207 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.465 nvme0n1 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.465 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:42.723 08:59:58 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.723 08:59:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 nvme0n1 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.982 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.549 nvme0n1 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # jq -r '.[].name' 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.549 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.809 nvme0n1 00:19:43.809 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.809 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.809 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.809 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.809 08:59:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:43.809 08:59:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:Y2MwNzVkYTQ4ODNhYTBlYzdjMGZiMTczMzk3OTI4NmOmIPvx: 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NTJhYzk0MmZjOWUyMzM5NjNhYTMyZGYxMmU5OTZhNjM4NWNiMmM2MGMyZDdiZTA3ZWQ4YjkzNWVmYWRjOTMxNsaVglM=: 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.809 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:44.744 nvme0n1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.744 09:00:00 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.310 nvme0n1 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MmFiMzdmOTVlOWM5Y2M3ZDA3N2ZmNmU1MDc2MjVmY2ErNgs9: 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: ]] 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZGNiNWIyNjI1MzI5YjhlNzZiMWQzOTNkMjQ5MGYn/6WK: 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:45.310 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.311 09:00:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:45.875 nvme0n1 00:19:45.875 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.875 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:45.875 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.875 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.875 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:45.875 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:YTZkYjljMDUzNTgzNmY2N2UwNWI3OGVjNzgyNGE5MTdjZWI1N2UyNTc5Y2JhZjAxkh1bpg==: 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NzJjOGQ1M2ZjMWExYTk1ZTIxZDA4NGNlNWJjMWU4ODF/970d: 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:46.133 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:46.134 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:46.134 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.134 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:46.698 nvme0n1 00:19:46.698 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.698 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:46.699 09:00:02 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTlhZWJjNDIzZmM3NzEyYTViZTZlZmI0NzUzZTE2OGM0N2VmYWJkOTI5MTkxZjJjMDQwZGMwNmUyYjA2M2UyMe7X8qY=: 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.699 09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.699 
09:00:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.634 nvme0n1 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjNzg1YTFkMWRjNjQ4NGIzZTkyMDE2NTY3YWIzOWEwZGMxNWVlNjRhYWQxOWZltVaNnw==: 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: ]] 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRlMTU3ZjAzMzliNWJlMjY2OGNlNWY4YzQ5MjQyYTYzNWY5MTAzYWM4NWEzMTY5WBZYuQ==: 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.634 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:47.634 
09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.635 2024/05/15 09:00:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:19:47.635 request: 00:19:47.635 { 00:19:47.635 "method": "bdev_nvme_attach_controller", 00:19:47.635 "params": { 00:19:47.635 "name": "nvme0", 00:19:47.635 "trtype": "tcp", 00:19:47.635 "traddr": "10.0.0.1", 00:19:47.635 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:47.635 "adrfam": "ipv4", 00:19:47.635 "trsvcid": "4420", 00:19:47.635 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:19:47.635 } 00:19:47.635 } 00:19:47.635 Got JSON-RPC error response 00:19:47.635 GoRPCClient: error on JSON-RPC call 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@130 
-- # get_main_ns_ip 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.635 2024/05/15 09:00:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:19:47.635 request: 00:19:47.635 { 00:19:47.635 "method": "bdev_nvme_attach_controller", 00:19:47.635 "params": { 00:19:47.635 "name": "nvme0", 00:19:47.635 "trtype": "tcp", 00:19:47.635 "traddr": "10.0.0.1", 00:19:47.635 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:47.635 "adrfam": "ipv4", 00:19:47.635 "trsvcid": "4420", 00:19:47.635 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:47.635 "dhchap_key": "key2" 00:19:47.635 } 00:19:47.635 } 00:19:47.635 Got JSON-RPC error response 00:19:47.635 GoRPCClient: error on JSON-RPC call 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ 
-n '' ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:47.635 2024/05/15 09:00:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:19:47.635 request: 00:19:47.635 { 00:19:47.635 "method": "bdev_nvme_attach_controller", 
00:19:47.635 "params": { 00:19:47.635 "name": "nvme0", 00:19:47.635 "trtype": "tcp", 00:19:47.635 "traddr": "10.0.0.1", 00:19:47.635 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:47.635 "adrfam": "ipv4", 00:19:47.635 "trsvcid": "4420", 00:19:47.635 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:47.635 "dhchap_key": "key1", 00:19:47.635 "dhchap_ctrlr_key": "ckey2" 00:19:47.635 } 00:19:47.635 } 00:19:47.635 Got JSON-RPC error response 00:19:47.635 GoRPCClient: error on JSON-RPC call 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.635 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.636 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.636 rmmod nvme_tcp 00:19:47.894 rmmod nvme_fabrics 00:19:47.894 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.894 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:19:47.894 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:19:47.894 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 85242 ']' 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 85242 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 85242 ']' 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 85242 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85242 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:47.895 killing process with pid 85242 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85242' 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 85242 00:19:47.895 09:00:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 85242 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:47.895 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:48.154 09:00:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:48.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.728 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:49.017 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:49.017 09:00:05 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Vti /tmp/spdk.key-null.Yvu /tmp/spdk.key-sha256.Utc /tmp/spdk.key-sha384.S4N /tmp/spdk.key-sha512.8VG /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:49.017 09:00:05 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:49.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.285 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:49.285 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:49.285 00:19:49.285 real 0m40.271s 00:19:49.285 user 0m36.458s 00:19:49.285 sys 0m3.497s 00:19:49.285 09:00:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:49.285 09:00:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:19:49.285 ************************************ 00:19:49.285 END TEST nvmf_auth 00:19:49.285 ************************************ 00:19:49.285 09:00:05 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:19:49.285 09:00:05 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:49.285 09:00:05 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:49.285 09:00:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:49.285 09:00:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.286 ************************************ 00:19:49.286 START TEST nvmf_digest 00:19:49.286 ************************************ 00:19:49.286 09:00:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:49.545 * Looking for test storage... 00:19:49.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.545 09:00:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:49.545 Cannot find device "nvmf_tgt_br" 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.545 Cannot find device "nvmf_tgt_br2" 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:49.545 Cannot find device "nvmf_tgt_br" 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:49.545 Cannot find device "nvmf_tgt_br2" 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.545 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.545 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:49.546 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:49.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:19:49.804 00:19:49.804 --- 10.0.0.2 ping statistics --- 00:19:49.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.804 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:49.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:49.804 00:19:49.804 --- 10.0.0.3 ping statistics --- 00:19:49.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.804 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:49.804 00:19:49.804 --- 10.0.0.1 ping statistics --- 00:19:49.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.804 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:49.804 ************************************ 00:19:49.804 START TEST nvmf_digest_clean 00:19:49.804 ************************************ 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.804 09:00:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=86916 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 86916 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 86916 ']' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:49.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:49.804 09:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.804 [2024-05-15 09:00:05.997611] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:19:49.804 [2024-05-15 09:00:05.997697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.062 [2024-05-15 09:00:06.135087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.062 [2024-05-15 09:00:06.193843] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.062 [2024-05-15 09:00:06.193892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.062 [2024-05-15 09:00:06.193905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.062 [2024-05-15 09:00:06.193913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.062 [2024-05-15 09:00:06.193921] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
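The nvmf_veth_init sequence above builds the isolated TCP topology that the digest tests run over: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining the host-side peers, static 10.0.0.0/24 addresses, an iptables rule admitting NVMe/TCP traffic on port 4420, and ping checks in each direction before nvmf_tgt is started inside the namespace with --wait-for-rpc. The sketch below condenses that setup to the first target interface, using only the names and addresses that appear in the log; it illustrates the topology and is not the actual nvmf/common.sh helper, which also wires up nvmf_tgt_if2/10.0.0.3 and tears down any leftover state first.

    # Sketch (run as root): host side 10.0.0.1, target-namespace side 10.0.0.2.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                              # bridge the two host-side peers
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host
    # The target is then launched inside the namespace, paused until RPC configuration:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc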
00:19:50.062 [2024-05-15 09:00:06.193948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.062 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:50.062 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:19:50.062 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.062 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.062 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.320 null0 00:19:50.320 [2024-05-15 09:00:06.390543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.320 [2024-05-15 09:00:06.414469] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:50.320 [2024-05-15 09:00:06.414723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86949 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86949 /var/tmp/bperf.sock 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 86949 ']' 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:19:50.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:50.320 09:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.320 [2024-05-15 09:00:06.477945] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:19:50.320 [2024-05-15 09:00:06.478034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86949 ] 00:19:50.578 [2024-05-15 09:00:06.618358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.578 [2024-05-15 09:00:06.690324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.512 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:51.512 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:19:51.512 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:51.512 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:51.512 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:51.769 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:51.769 09:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:52.027 nvme0n1 00:19:52.027 09:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:52.027 09:00:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:52.285 Running I/O for 2 seconds... 
00:19:54.182 00:19:54.182 Latency(us) 00:19:54.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.182 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:54.182 nvme0n1 : 2.00 17716.75 69.21 0.00 0.00 7215.81 3664.06 14358.34 00:19:54.182 =================================================================================================================== 00:19:54.182 Total : 17716.75 69.21 0.00 0.00 7215.81 3664.06 14358.34 00:19:54.182 0 00:19:54.182 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:54.182 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:54.182 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:54.182 | select(.opcode=="crc32c") 00:19:54.182 | "\(.module_name) \(.executed)"' 00:19:54.182 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:54.182 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86949 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 86949 ']' 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 86949 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86949 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86949' 00:19:54.439 killing process with pid 86949 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 86949 00:19:54.439 Received shutdown signal, test time was about 2.000000 seconds 00:19:54.439 00:19:54.439 Latency(us) 00:19:54.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.439 =================================================================================================================== 00:19:54.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.439 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 86949 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87044 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87044 /var/tmp/bperf.sock 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87044 ']' 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:54.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:54.697 09:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:54.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:54.697 Zero copy mechanism will not be used. 00:19:54.697 [2024-05-15 09:00:10.891168] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:19:54.697 [2024-05-15 09:00:10.891261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87044 ] 00:19:54.954 [2024-05-15 09:00:11.028655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.954 [2024-05-15 09:00:11.099058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.886 09:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:55.886 09:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:19:55.886 09:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:55.886 09:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:55.887 09:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:56.143 09:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.143 09:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.707 nvme0n1 00:19:56.707 09:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:56.707 09:00:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:56.707 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:56.707 Zero copy mechanism will not be used. 00:19:56.707 Running I/O for 2 seconds... 
00:19:58.612 00:19:58.612 Latency(us) 00:19:58.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:58.612 nvme0n1 : 2.00 7688.81 961.10 0.00 0.00 2076.34 647.91 7328.12 00:19:58.612 =================================================================================================================== 00:19:58.612 Total : 7688.81 961.10 0.00 0.00 2076.34 647.91 7328.12 00:19:58.612 0 00:19:58.612 09:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:58.612 09:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:58.612 09:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:58.612 09:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:58.612 09:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:58.612 | select(.opcode=="crc32c") 00:19:58.612 | "\(.module_name) \(.executed)"' 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87044 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87044 ']' 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87044 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.870 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87044 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:59.128 killing process with pid 87044 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87044' 00:19:59.128 Received shutdown signal, test time was about 2.000000 seconds 00:19:59.128 00:19:59.128 Latency(us) 00:19:59.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.128 =================================================================================================================== 00:19:59.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87044 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87044 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87130 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87130 /var/tmp/bperf.sock 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87130 ']' 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.128 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:59.128 [2024-05-15 09:00:15.360134] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:19:59.128 [2024-05-15 09:00:15.360215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87130 ] 00:19:59.400 [2024-05-15 09:00:15.493161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.400 [2024-05-15 09:00:15.556861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.400 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:59.400 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:19:59.400 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:59.400 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:59.400 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:59.965 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:59.965 09:00:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:00.223 nvme0n1 00:20:00.223 09:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:00.223 09:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:00.223 Running I/O for 2 seconds... 
00:20:02.749 00:20:02.749 Latency(us) 00:20:02.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.749 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.749 nvme0n1 : 2.01 21188.82 82.77 0.00 0.00 6032.83 2561.86 11021.96 00:20:02.749 =================================================================================================================== 00:20:02.749 Total : 21188.82 82.77 0.00 0.00 6032.83 2561.86 11021.96 00:20:02.749 0 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:02.749 | select(.opcode=="crc32c") 00:20:02.749 | "\(.module_name) \(.executed)"' 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:02.749 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87130 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87130 ']' 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87130 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87130 00:20:02.750 killing process with pid 87130 00:20:02.750 Received shutdown signal, test time was about 2.000000 seconds 00:20:02.750 00:20:02.750 Latency(us) 00:20:02.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.750 =================================================================================================================== 00:20:02.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87130' 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87130 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87130 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87201 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87201 /var/tmp/bperf.sock 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87201 ']' 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:02.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.750 09:00:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:02.750 [2024-05-15 09:00:18.938240] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:02.750 [2024-05-15 09:00:18.938539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87201 ] 00:20:02.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:02.750 Zero copy mechanism will not be used. 
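Each run_bperf iteration, including the 131072-byte/qd16 randwrite run starting here, repeats the remote-RPC pattern already shown for the earlier runs: bdevperf is launched suspended on its own RPC socket (-z keeps the application alive, --wait-for-rpc defers framework initialization), framework_start_init is issued, an NVMe-oF TCP controller is attached with data digest enabled (--ddgst), perform_tests drives I/O for the configured two seconds, and accel_get_stats is filtered with jq to confirm which accel module executed the crc32c digests before the process is killed. A condensed sketch of one iteration, using the socket path, target address and NQN shown in the log (the dsa variants and the pid/trap bookkeeping of host/digest.sh are omitted):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    # Launch bdevperf paused; it listens for RPCs on $SOCK instead of running immediately.
    "$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    bperfpid=$!

    # Finish framework init, then attach the TCP controller with data digest enabled.
    "$SPDK"/scripts/rpc.py -s "$SOCK" framework_start_init
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the 2-second workload, then read back which accel module performed the crc32c work.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
    "$SPDK"/scripts/rpc.py -s "$SOCK" accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

    kill "$bperfpid"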
00:20:03.008 [2024-05-15 09:00:19.086225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.008 [2024-05-15 09:00:19.146121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.942 09:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.942 09:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:03.942 09:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:03.942 09:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:03.942 09:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:03.942 09:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:03.942 09:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:04.508 nvme0n1 00:20:04.508 09:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:04.509 09:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:04.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:04.509 Zero copy mechanism will not be used. 00:20:04.509 Running I/O for 2 seconds... 00:20:06.408 00:20:06.408 Latency(us) 00:20:06.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.408 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:06.408 nvme0n1 : 2.00 6677.04 834.63 0.00 0.00 2390.06 1906.50 4349.21 00:20:06.408 =================================================================================================================== 00:20:06.408 Total : 6677.04 834.63 0.00 0.00 2390.06 1906.50 4349.21 00:20:06.408 0 00:20:06.408 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:06.408 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:06.408 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:06.408 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:06.408 | select(.opcode=="crc32c") 00:20:06.408 | "\(.module_name) \(.executed)"' 00:20:06.408 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87201 00:20:06.666 09:00:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87201 ']' 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87201 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87201 00:20:06.666 killing process with pid 87201 00:20:06.666 Received shutdown signal, test time was about 2.000000 seconds 00:20:06.666 00:20:06.666 Latency(us) 00:20:06.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.666 =================================================================================================================== 00:20:06.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87201' 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87201 00:20:06.666 09:00:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87201 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86916 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 86916 ']' 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 86916 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86916 00:20:06.925 killing process with pid 86916 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86916' 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 86916 00:20:06.925 [2024-05-15 09:00:23.082153] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:06.925 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 86916 00:20:07.183 00:20:07.183 real 0m17.340s 00:20:07.183 user 0m34.014s 00:20:07.183 sys 0m4.346s 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:07.183 ************************************ 00:20:07.183 END TEST nvmf_digest_clean 00:20:07.183 ************************************ 00:20:07.183 09:00:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:07.183 ************************************ 00:20:07.183 START TEST nvmf_digest_error 00:20:07.183 ************************************ 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=87320 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 87320 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87320 ']' 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:07.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:07.183 09:00:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.183 [2024-05-15 09:00:23.370225] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:07.183 [2024-05-15 09:00:23.370308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.441 [2024-05-15 09:00:23.508156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.441 [2024-05-15 09:00:23.584017] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.441 [2024-05-15 09:00:23.584088] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.441 [2024-05-15 09:00:23.584103] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.441 [2024-05-15 09:00:23.584114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:07.441 [2024-05-15 09:00:23.584123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.441 [2024-05-15 09:00:23.584151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:08.375 [2024-05-15 09:00:24.388681] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:08.375 null0 00:20:08.375 [2024-05-15 09:00:24.461273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.375 [2024-05-15 09:00:24.485206] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:08.375 [2024-05-15 09:00:24.485452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87364 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87364 /var/tmp/bperf.sock 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:08.375 
09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87364 ']' 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:08.375 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.376 09:00:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:08.376 [2024-05-15 09:00:24.552385] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:08.376 [2024-05-15 09:00:24.552526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87364 ] 00:20:08.634 [2024-05-15 09:00:24.697717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.634 [2024-05-15 09:00:24.756990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:09.569 09:00:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:10.137 nvme0n1 00:20:10.137 09:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:10.137 09:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.137 09:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:10.137 09:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.137 09:00:26 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:10.137 09:00:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:10.137 Running I/O for 2 seconds... 00:20:10.137 [2024-05-15 09:00:26.262572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.262672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.262689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.275556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.275643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.275659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.291952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.292020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.292051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.307377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.307458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.307490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.321692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.321751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.321766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.335967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.336037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.336052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.349599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.349655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.349686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.137 [2024-05-15 09:00:26.362707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.137 [2024-05-15 09:00:26.362747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.137 [2024-05-15 09:00:26.362762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.377048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.377104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.377134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.392478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.392548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.392589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.405567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.405632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.405662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.418703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.418757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.418787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.434195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.434250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.434280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.448645] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.448699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.448714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.463267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.463346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.463377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.476234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.476281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.476297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.491530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.491636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.491653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.506759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.506818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.506848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.519718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.519779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.519794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.533551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.533605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.533621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:10.397 [2024-05-15 09:00:26.548300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.548345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.548360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.561926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.562007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.576872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.576931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.576962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.592680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.592725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.592740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.605620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.605698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.605723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.397 [2024-05-15 09:00:26.620077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.397 [2024-05-15 09:00:26.620127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.397 [2024-05-15 09:00:26.620141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.634817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.634887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.634903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.648736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.648798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.648828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.659805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.659848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.659863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.677051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.677107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.677137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.689134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.689193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.689208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.704822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.704879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.704909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.721025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.721104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.721119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.735774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.735820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.669 [2024-05-15 09:00:26.735836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.669 [2024-05-15 09:00:26.749677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.669 [2024-05-15 09:00:26.749720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.749735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.764462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.764546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.764575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.779136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.779195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.779225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.791405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.791461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.791491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.806620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.806658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.806688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.820700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.820747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.820762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.837203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.837247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:10.670 [2024-05-15 09:00:26.837263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.850893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.850948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.850968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.866509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.866574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.866590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.882081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.882150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.882180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.670 [2024-05-15 09:00:26.897234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.670 [2024-05-15 09:00:26.897290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.670 [2024-05-15 09:00:26.897306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:26.914681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:26.914737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:26.914753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:26.927718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:26.927772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:26.927789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:26.943394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:26.943445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:11763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:26.943460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:26.960061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:26.960139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:26.960155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:26.975208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:26.975271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:26.975286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:26.990842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:26.990896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:26.990911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.005978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.006043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.006058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.021068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.021124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.021140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.031897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.031945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.031960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.047694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.047756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.047772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.063670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.063727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.063742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.076863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.076920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.076951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.091098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.091155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.091171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.106890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.106948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.106963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.121851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.121896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.121911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.136212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:10.928 [2024-05-15 09:00:27.136258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.136272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.928 [2024-05-15 09:00:27.150721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 
00:20:10.928 [2024-05-15 09:00:27.150763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.928 [2024-05-15 09:00:27.150779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.164797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.164859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.164881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.181076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.181184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.181207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.196803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.196868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.196890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.212768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.212846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.212866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.228720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.228794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.228817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.244032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.244100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.244116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.259793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.259839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.259855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.270450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.270499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.270514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.287103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.287160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.287176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.302255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.302310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.302340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.318539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.318608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.318623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.331313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.331373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.331388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.347107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.347152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.347167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.362093] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.362135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.188 [2024-05-15 09:00:27.362150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.188 [2024-05-15 09:00:27.377005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.188 [2024-05-15 09:00:27.377070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-05-15 09:00:27.377086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-05-15 09:00:27.391716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.189 [2024-05-15 09:00:27.391757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-05-15 09:00:27.391771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-05-15 09:00:27.405728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.189 [2024-05-15 09:00:27.405772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-05-15 09:00:27.405787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.189 [2024-05-15 09:00:27.418825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.189 [2024-05-15 09:00:27.418869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.189 [2024-05-15 09:00:27.418884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.448 [2024-05-15 09:00:27.434315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.448 [2024-05-15 09:00:27.434367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.448 [2024-05-15 09:00:27.434383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.448 [2024-05-15 09:00:27.449818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.448 [2024-05-15 09:00:27.449889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.448 [2024-05-15 09:00:27.449905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:11.448 [2024-05-15 09:00:27.463779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.448 [2024-05-15 09:00:27.463822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.448 [2024-05-15 09:00:27.463837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.448 [2024-05-15 09:00:27.479042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.448 [2024-05-15 09:00:27.479099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.448 [2024-05-15 09:00:27.479114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.448 [2024-05-15 09:00:27.492578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.448 [2024-05-15 09:00:27.492680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.492712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.507452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.507507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.507537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.522503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.522546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.522572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.536827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.536868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.536883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.549378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.549449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.549465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.565470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.565551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.565581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.581679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.581760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.581776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.594381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.594443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.594458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.611750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.611793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.611807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.626964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.627053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.641204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.641260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.641291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.653720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.653775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.653805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.449 [2024-05-15 09:00:27.670151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.449 [2024-05-15 09:00:27.670232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.449 [2024-05-15 09:00:27.670248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.686828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.686902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.686933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.701578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.701617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.701632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.714836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.714876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.714891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.730712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.730754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.730768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.744047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.744127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.744143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.759176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.759231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:11.710 [2024-05-15 09:00:27.759247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.777495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.777556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.777599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.790664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.790703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.790717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.805714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.805770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.805785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.820713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.820754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.820768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.835230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.835282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.835298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.850531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.850593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.850609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.863311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.863354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:16206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.863369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.877739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.877781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.877796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.892003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.892058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.892083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.906303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.906344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.906358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.921049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.921108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.921124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.710 [2024-05-15 09:00:27.936939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.710 [2024-05-15 09:00:27.936992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.710 [2024-05-15 09:00:27.937007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.970 [2024-05-15 09:00:27.948684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.970 [2024-05-15 09:00:27.948742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.970 [2024-05-15 09:00:27.948757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.970 [2024-05-15 09:00:27.965467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.970 [2024-05-15 09:00:27.965529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.970 [2024-05-15 09:00:27.965544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.970 [2024-05-15 09:00:27.979820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.970 [2024-05-15 09:00:27.979860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.970 [2024-05-15 09:00:27.979874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.970 [2024-05-15 09:00:27.994386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:27.994428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:27.994442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.009290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.009368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.009399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.025132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.025183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.025198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.039429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.039492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.039508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.053347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.053392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.053407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.067130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 
00:20:11.971 [2024-05-15 09:00:28.067176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.067190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.084286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.084332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.084347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.099686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.099728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.099742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.113803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.113845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.113859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.126988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.127053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.127069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.140272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.140320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.140334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.155004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.155048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.155063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.168791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.168834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.168848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.184739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.184784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.184798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:11.971 [2024-05-15 09:00:28.199905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:11.971 [2024-05-15 09:00:28.199951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.971 [2024-05-15 09:00:28.199966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.229 [2024-05-15 09:00:28.214132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:12.229 [2024-05-15 09:00:28.214200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.229 [2024-05-15 09:00:28.214216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.229 [2024-05-15 09:00:28.225842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:12.229 [2024-05-15 09:00:28.225886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.229 [2024-05-15 09:00:28.225900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.229 [2024-05-15 09:00:28.241949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11929d0) 00:20:12.229 [2024-05-15 09:00:28.241995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.229 [2024-05-15 09:00:28.242010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.229 00:20:12.229 Latency(us) 00:20:12.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.229 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:12.229 nvme0n1 : 2.01 17369.21 67.85 0.00 0.00 7358.19 3932.16 20971.52 00:20:12.229 =================================================================================================================== 00:20:12.229 Total : 17369.21 67.85 0.00 0.00 7358.19 3932.16 20971.52 00:20:12.229 0 00:20:12.229 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:12.229 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:12.230 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:12.230 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:12.230 | .driver_specific 00:20:12.230 | .nvme_error 00:20:12.230 | .status_code 00:20:12.230 | .command_transient_transport_error' 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87364 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87364 ']' 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87364 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87364 00:20:12.489 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:12.489 killing process with pid 87364 00:20:12.489 Received shutdown signal, test time was about 2.000000 seconds 00:20:12.489 00:20:12.490 Latency(us) 00:20:12.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.490 =================================================================================================================== 00:20:12.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.490 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:12.490 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87364' 00:20:12.490 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87364 00:20:12.490 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87364 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87454 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:12.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
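Editor's note: the get_transient_errcount/jq sequence a few lines above is the pass criterion for the run that just finished. host/digest.sh asks the bdevperf RPC server for per-bdev I/O statistics (bdev_get_iostat) and pulls the transient transport error counter out of the JSON with jq, then requires it to be non-zero (136 in this run) before killing the bperf process. Below is a minimal stand-alone sketch of that query, not part of the original scripts, assuming a bdevperf instance is listening on /var/tmp/bperf.sock, was started with bdev_nvme_set_options --nvme-error-stat, and exposes a bdev named nvme0n1:

  #!/usr/bin/env bash
  # Sketch: reproduce the get_transient_errcount query from host/digest.sh.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the trace above
  SOCK=/var/tmp/bperf.sock                          # bdevperf RPC socket, from the trace
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
  # digest.sh passes this stage only when the counter is non-zero (136 in this run).
  if (( errcount > 0 )); then
        echo "transient transport errors observed: $errcount"
  else
        echo "no transient transport errors; digest corruption was not exercised" >&2
        exit 1
  fi

The non-zero count is what separates a run that actually exercised the corrupted-digest path from one where every READ completed cleanly.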
00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87454 /var/tmp/bperf.sock 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87454 ']' 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:12.747 09:00:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:12.747 [2024-05-15 09:00:28.829632] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:12.747 [2024-05-15 09:00:28.829742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87454 ] 00:20:12.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:12.747 Zero copy mechanism will not be used. 00:20:12.747 [2024-05-15 09:00:28.964332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.004 [2024-05-15 09:00:29.034230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.004 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:13.004 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:13.004 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:13.004 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:13.262 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:13.262 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.262 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:13.262 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.262 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.262 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.521 nvme0n1 00:20:13.780 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:13.780 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.780 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.780 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.780 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:13.780 09:00:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:13.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:13.780 Zero copy mechanism will not be used. 00:20:13.780 Running I/O for 2 seconds... 00:20:13.780 [2024-05-15 09:00:29.911088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.911151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.911168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.915899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.915940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.915955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.920805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.920847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.920862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.925256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.925301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.925315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.928633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.928672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.928687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.932632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.932671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 
09:00:29.932686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.936495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.936535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.936549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.940476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.940518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.940532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.944514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.944554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.944583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.948441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.948481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.948495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.952350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.952391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.952406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.955860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.955899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.955914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.959500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.959559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.959601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.964340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.964385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.964401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.780 [2024-05-15 09:00:29.968634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.780 [2024-05-15 09:00:29.968676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.780 [2024-05-15 09:00:29.968691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.972786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.972828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.972843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.976920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.976962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.976977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.980869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.980911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.980926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.984267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.984324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.984341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.989135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.989180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.989196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.992839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.992885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.992901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:29.997421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:29.997465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:29.997480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:30.003214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:30.003269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:30.003284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:30.007047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:30.007091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:30.007108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.781 [2024-05-15 09:00:30.011502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:13.781 [2024-05-15 09:00:30.011547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.781 [2024-05-15 09:00:30.011577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.060 [2024-05-15 09:00:30.015643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.015689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.015705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.019225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.019269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.019284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.023026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.023071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.023086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.027092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.027137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.027152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.031532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.031593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.031608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.035789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.035830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.035845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.039753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.039796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.039821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.043647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.043689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.043704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.047435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 
00:20:14.061 [2024-05-15 09:00:30.047476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.047490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.051816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.051858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.051872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.056593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.056631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.056647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.060288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.060329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.060343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.064023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.064075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.064099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.069023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.069072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.069087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.072714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.072758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.072780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.077359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.077404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.077419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.082613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.082657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.082673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.087222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.087265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.087280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.061 [2024-05-15 09:00:30.090491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.061 [2024-05-15 09:00:30.090533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.061 [2024-05-15 09:00:30.090547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.095317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.095360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.095375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.100510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.100576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.100593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.105078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.105119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.105134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.108211] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.108252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.108266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.112441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.112483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.112498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.116995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.117037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.117051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.121835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.121879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.121894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.126308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.126347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.126362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.129408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.129448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.129464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.134122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.134164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.134178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
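Editor's note: every READ in this stretch completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) by design. Earlier in the trace the script first disables crc32c error injection, attaches the controller with TCP data digest enabled (--ddgst), and then switches injection to corrupt every 32nd crc32c operation, so the digest accompanying received data periodically fails verification on the host side and nvme_tcp.c logs "data digest error". A hedged sketch of that setup, using only the RPC calls visible in the trace; the nvmf target socket path is an assumption (rpc_cmd in the test addresses the target app, not bperf):

  #!/usr/bin/env bash
  # Sketch of the error-injection setup mirrored from the digest.sh trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock      # bdevperf RPC socket, from the trace
  TGT_SOCK=/var/tmp/spdk.sock         # assumed nvmf target RPC socket

  # Initiator side (bdevperf): keep per-status-code NVMe error counters and retry
  # failed I/O indefinitely, then attach the controller with data digest enabled.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: corrupt every 32nd crc32c operation in the accel framework, so the
  # digest sent with C2H data no longer matches and the host records the transient
  # transport errors seen throughout this log.
  "$RPC" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

With --bdev-retry-count -1 the corrupted READs are retried rather than failed up the stack, which is why the run still reports completed I/O in the latency table while the transient error counter climbs.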
00:20:14.062 [2024-05-15 09:00:30.138126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.138168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.138182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.141898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.141940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.141955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.146232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.146274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.146289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.149808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.149852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.149866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.154000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.154041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.154056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.158697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.158737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.158751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.162127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.162168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.162182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.166828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.166869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.166884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.170884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.170924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.170938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.174653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.174693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.174709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.178806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.178847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.178861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.183887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.183929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.183944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.187264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.187309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.187324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.191711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.191753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.191767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.196620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.196656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.196671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.200677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.200717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.200731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.203479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.203521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.203536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.208030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.208098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.208116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.212500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.212546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.212575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.215894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.215934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.215948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.220271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.220318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.220334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.224469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.224513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.224528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.228122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.228163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.228177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.232262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.232303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.232318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.236853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.236911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.240706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.240747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.062 [2024-05-15 09:00:30.240761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.062 [2024-05-15 09:00:30.244281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.062 [2024-05-15 09:00:30.244321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 [2024-05-15 09:00:30.244336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.063 [2024-05-15 09:00:30.248964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.063 [2024-05-15 09:00:30.249023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 
[2024-05-15 09:00:30.249046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.063 [2024-05-15 09:00:30.255530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.063 [2024-05-15 09:00:30.255610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 [2024-05-15 09:00:30.255634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.063 [2024-05-15 09:00:30.260941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.063 [2024-05-15 09:00:30.261004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 [2024-05-15 09:00:30.261030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.063 [2024-05-15 09:00:30.267684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.063 [2024-05-15 09:00:30.267747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 [2024-05-15 09:00:30.267772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.063 [2024-05-15 09:00:30.274924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.063 [2024-05-15 09:00:30.274987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 [2024-05-15 09:00:30.275013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.063 [2024-05-15 09:00:30.282005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.063 [2024-05-15 09:00:30.282068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.063 [2024-05-15 09:00:30.282094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.326 [2024-05-15 09:00:30.288955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.326 [2024-05-15 09:00:30.289019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.326 [2024-05-15 09:00:30.289043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.326 [2024-05-15 09:00:30.294681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.326 [2024-05-15 09:00:30.294735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.326 [2024-05-15 09:00:30.294750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.326 [2024-05-15 09:00:30.299587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.299628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.299642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.304111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.304151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.304166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.307641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.307682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.307696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.312120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.312160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.312174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.315805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.315854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.315868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.319920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.319975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.319990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.324187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.324227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.324241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.327756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.327797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.327812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.331917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.331959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.335462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.335502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.335516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.339483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.339525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.339539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.343339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.343378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.343392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.347979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.348019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.348033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.351822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.351862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.351875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.356382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.356422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.356437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.359831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.359871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.359885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.364771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.364813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.364827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.368310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.368353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.368368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.372637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.372678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.372693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.376469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.376510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.376524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.380446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 
[2024-05-15 09:00:30.380486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.380500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.383857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.383896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.383910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.387379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.387418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.387432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.391386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.391429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.391443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.395532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.395587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.395602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.399773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.399813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.399827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.403422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.403464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.403479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.407753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.407792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.407807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.411235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.411275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.411289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.415685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.415725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.415739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.419238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.419278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.419292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.423093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.423134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.423148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.427191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.427230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.427244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.431117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.431157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.431171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.435186] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.435227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.435241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.438605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.438644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.438658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.442936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.442976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.442990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.447862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.447904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.447919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.451220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.451260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.451274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.455446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.455486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.455500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.460187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.460226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.460241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
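Every *ERROR* line above is the same event: the host computed a CRC32C over the data of a received NVMe/TCP C2HData PDU, it did not match the DDGST field carried on the wire, and the affected READ was completed with a transient transport error rather than being treated as good data. The block below is a minimal, self-contained sketch of that check, assuming a plain software CRC32C (Castagnoli polynomial, the digest NVMe/TCP uses for DDGST); it is not SPDK's implementation, which uses its own and often hardware-accelerated CRC32C helpers, and `payload` / `wire_ddgst` are illustrative stand-ins, not fields from this log.

```c
/*
 * Sketch of the data-digest check the log shows failing: compute CRC32C over
 * a received payload and compare it with the digest from the PDU.  Standalone
 * and software-only; SPDK's real path differs.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int bit = 0; bit < 8; bit++) {
			/* Reflected Castagnoli polynomial 0x82F63B78 */
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t payload[32] = { 0 };        /* hypothetical C2HData payload  */
	uint32_t wire_ddgst = 0xDEADBEEFu;  /* hypothetical DDGST from PDU   */

	uint32_t computed = crc32c(payload, sizeof(payload));

	if (computed != wire_ddgst) {
		/* This mismatch is what the *ERROR* lines above report. */
		printf("data digest error: computed=0x%08x wire=0x%08x\n",
		       computed, wire_ddgst);
		return 1;
	}
	return 0;
}
```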
00:20:14.327 [2024-05-15 09:00:30.463731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.463769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.463783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.468172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.468213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.468227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.471860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.471899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.471913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.327 [2024-05-15 09:00:30.477480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.327 [2024-05-15 09:00:30.477520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.327 [2024-05-15 09:00:30.477543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.482942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.482983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.482998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.486696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.486736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.486751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.490298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.490341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.490355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.494971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.495025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.495039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.500273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.500314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.500329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.503920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.503959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.503973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.507848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.507888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.507902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.511925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.511965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.511979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.516176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.516215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.516229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.519533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.519590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.519605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.523984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.524023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.524038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.528151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.528190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.528204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.531758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.531800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.531814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.535745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.535786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.535800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.539666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.539705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.539719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.543389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.543428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.543443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.547154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.547197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.547212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.551554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.551605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.551620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.328 [2024-05-15 09:00:30.555244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.328 [2024-05-15 09:00:30.555288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.328 [2024-05-15 09:00:30.555303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.558852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.558893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.558907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.563005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.563050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.563064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.567619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.567657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.567672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.571008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.571050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.571065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.575161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.575214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:14.589 [2024-05-15 09:00:30.575229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.579447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.579491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.579506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.583267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.583311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.583326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.587650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.587691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.587706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.591940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.591981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.591995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.596309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.596350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.596365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.600210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.600251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.600265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.603765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.603805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.603820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.608350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.608390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.589 [2024-05-15 09:00:30.608411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.589 [2024-05-15 09:00:30.612723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.589 [2024-05-15 09:00:30.612764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.612778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.616235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.616280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.616295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.619751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.619790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.619805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.623796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.623836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.623851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.627607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.627646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.627660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.631432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.631472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.631487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.635277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.635317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.635331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.639635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.639679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.639693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.642742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.642780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.642794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.647002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.647043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.647057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.651298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.651342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.651356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.656289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.656332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.656347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.660159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 
00:20:14.590 [2024-05-15 09:00:30.660200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.660214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.664279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.664324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.664339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.668921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.668962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.668976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.673720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.673759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.673774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.678301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.678341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.678355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.681269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.681307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.681321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.686293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.686334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.686349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.691253] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.691293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.691307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.694072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.694111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.694125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.699362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.699405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.699419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.703763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.703802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.703816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.707592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.707630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.707644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.711693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.711732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.711746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.590 [2024-05-15 09:00:30.715461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.715503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.715517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
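The "(00/22)" printed with each completion is the status code type / status code pair in hex: type 0x00 (generic command status) and code 0x22, which the completion printer renders as COMMAND TRANSIENT TRANSPORT ERROR, followed by the phase, more, and do-not-retry bits ("p:0 m:0 dnr:0"). A hedged sketch of pulling those fields out of the raw 16-bit completion status word is below; the bit layout follows the NVMe base specification, and the struct and function names are local to this sketch, not SPDK identifiers.

```c
/*
 * Illustrative decode of the NVMe completion status word into the fields the
 * log prints as "(sct/sc) ... p:_ m:_ dnr:_".  Names are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

struct status_fields {
	unsigned p   : 1; /* phase tag                       */
	unsigned sc  : 8; /* status code (0x22 in this log)  */
	unsigned sct : 3; /* status code type (0x0 here)     */
	unsigned crd : 2; /* command retry delay             */
	unsigned m   : 1; /* more                            */
	unsigned dnr : 1; /* do not retry                    */
};

static struct status_fields decode(uint16_t raw)
{
	struct status_fields f = {
		.p   =  raw        & 0x1,
		.sc  = (raw >> 1)  & 0xFF,
		.sct = (raw >> 9)  & 0x7,
		.crd = (raw >> 12) & 0x3,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return f;
}

int main(void)
{
	/* sct=0x0, sc=0x22: the pair this log prints as "(00/22)". */
	uint16_t raw = (0x0u << 9) | (0x22u << 1);
	struct status_fields f = decode(raw);

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
	return 0;
}
```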
00:20:14.590 [2024-05-15 09:00:30.720015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.590 [2024-05-15 09:00:30.720055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.590 [2024-05-15 09:00:30.720078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.723970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.724011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.724025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.728245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.728285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.728299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.731950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.731990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.732004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.736040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.736089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.736104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.740030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.740079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.740094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.743971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.744011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.744025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.747732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.747772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.747787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.751697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.751736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.751751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.755837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.755882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.755896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.759502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.759542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.759555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.763980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.764020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.764034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.767293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.767333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.767349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.771635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.771675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.771689] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.776957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.776998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.777013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.781386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.781428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.781442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.785193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.785235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.785249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.788512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.788553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.788580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.793081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.793122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.793135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.798211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.798264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.798279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.802871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.802917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.802932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.806672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.806711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.806725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.810644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.810683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.810697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.815304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.815345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.815359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.591 [2024-05-15 09:00:30.819277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.591 [2024-05-15 09:00:30.819316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.591 [2024-05-15 09:00:30.819330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.822831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.822869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.822884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.826616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.826657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.826672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.830947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.831010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:14.852 [2024-05-15 09:00:30.831024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.835474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.835518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.835533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.839264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.839300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.839314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.843245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.843284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.843297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.847698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.847736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.847750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.851349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.851386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.851400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.855158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.855195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.855208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.859230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.859266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.859280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.863904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.863942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.863956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.867101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.867138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.867153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.871438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.871476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.871490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.875877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.875919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.875940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.879190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.879228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.879242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.883779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.883818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.883832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.887193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.887236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.887250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.891836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.891877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.891891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.896443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.852 [2024-05-15 09:00:30.896484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.852 [2024-05-15 09:00:30.896498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.852 [2024-05-15 09:00:30.900936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.900984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.900998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.904924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.904960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.904975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.909017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.909055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.909069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.912482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.912517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.912531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.917373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.917413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.917427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.922432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.922471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.922484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.925613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.925648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.925662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.930033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.930071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.930085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.934126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.934164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.934178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.938369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.938408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.938422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.941981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.942031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.946083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 
00:20:14.853 [2024-05-15 09:00:30.946121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.946135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.949257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.949295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.949308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.953549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.953599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.953614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.957445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.957477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.957490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.961086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.961123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.961137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.965372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.965408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.965421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.969278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.969314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.969327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.973043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.973077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.973091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.977339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.977374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.977387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.982100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.982134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.982147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.986163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.986198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.986211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.990125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.990161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.990174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.994034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.994069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.994082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:30.997946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:30.997981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:30.997995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:31.002587] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:31.002620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:31.002634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:31.007420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:31.007456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:31.007469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:31.010352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:31.010386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:31.010399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:31.015182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:31.015219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:31.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:31.019288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:31.019323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.853 [2024-05-15 09:00:31.019337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.853 [2024-05-15 09:00:31.022711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.853 [2024-05-15 09:00:31.022746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.022759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.026577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.026619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.026633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:14.854 [2024-05-15 09:00:31.030111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.030147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.030160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.034429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.034463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.034476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.038551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.038596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.038610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.042434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.042470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.042483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.046850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.046900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.046914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.050496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.050531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.050545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.053987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.054022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.054035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.057971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.058006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.058020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.062147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.062181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.062195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.067197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.067237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.067251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.070153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.070188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.070201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.073480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.073516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.073529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.078608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.078643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.078656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.854 [2024-05-15 09:00:31.082455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:14.854 [2024-05-15 09:00:31.082492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.854 [2024-05-15 09:00:31.082506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.114 [2024-05-15 09:00:31.087425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.114 [2024-05-15 09:00:31.087461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.114 [2024-05-15 09:00:31.087474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.114 [2024-05-15 09:00:31.090715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.114 [2024-05-15 09:00:31.090748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.114 [2024-05-15 09:00:31.090762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.114 [2024-05-15 09:00:31.095714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.114 [2024-05-15 09:00:31.095749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.114 [2024-05-15 09:00:31.095763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.114 [2024-05-15 09:00:31.100196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.114 [2024-05-15 09:00:31.100234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.114 [2024-05-15 09:00:31.100247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.103513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.103548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.103573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.107812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.107847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.112338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.112375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:15.115 [2024-05-15 09:00:31.112388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.116206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.116242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.116256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.120610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.120643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.120656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.124275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.124310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.124324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.128622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.128662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.128675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.132188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.132224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.132237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.136788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.136824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.136838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.141959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.141995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.142009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.146826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.146861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.146875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.149927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.149961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.149973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.154365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.154400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.154414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.159400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.159436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.159449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.164444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.164479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.164492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.169189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.169225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.169238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.172633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.172670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.172684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.177153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.177223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.177237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.181260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.181295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.181308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.184919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.184956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.184970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.188706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.188743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.188757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.192629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.192663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.192677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.197503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.197545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.197575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.201073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 
00:20:15.115 [2024-05-15 09:00:31.201120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.201134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.205704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.205756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.205770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.209552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.209616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.209630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.213742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.115 [2024-05-15 09:00:31.213795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.115 [2024-05-15 09:00:31.213809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.115 [2024-05-15 09:00:31.217737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.217773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.217787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.221882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.221927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.221942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.225736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.225773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.225787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.229694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.229732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.229746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.234110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.234149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.234162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.238391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.238428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.238442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.242760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.242798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.242812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.247317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.247365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.247378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.250804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.250840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.250853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.256047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.256106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.256121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.261504] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.261543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.261556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.266731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.266769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.266783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.269532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.269580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.269596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.275054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.275091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.275106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.279981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.280018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.280032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.283043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.283083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.283103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.287854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.287894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.287908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:15.116 [2024-05-15 09:00:31.292450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.292489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.292503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.295833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.295871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.295885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.300169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.300219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.300236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.305217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.305267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.305280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.310225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.310267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.310281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.314928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.314983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.315000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.317891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.317930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.317944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.322703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.322753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.322767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.326768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.326812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.326827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.331247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.331294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.116 [2024-05-15 09:00:31.331308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.116 [2024-05-15 09:00:31.336641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.116 [2024-05-15 09:00:31.336681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.117 [2024-05-15 09:00:31.336695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.117 [2024-05-15 09:00:31.340044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.117 [2024-05-15 09:00:31.340094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.117 [2024-05-15 09:00:31.340108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.117 [2024-05-15 09:00:31.344134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.117 [2024-05-15 09:00:31.344173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.117 [2024-05-15 09:00:31.344187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.348200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.348240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.348254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.352381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.352421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.352435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.356225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.356264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.356290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.360488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.360527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.360541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.364095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.364134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.364148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.369444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.369484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.369498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.373850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.373896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.373911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.378069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.378107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 
[2024-05-15 09:00:31.378121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.382806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.382843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.382857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.387782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.387831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.387844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.392004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.392042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.392055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.395818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.395856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.395876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.400525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.400575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.400590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.404967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.405005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.405019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.409454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.409494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.409508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.412644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.412687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.412701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.416765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.416804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.416818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.421029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.421068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.421082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.424124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.424162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.424175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.428610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.428648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.380 [2024-05-15 09:00:31.428662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.380 [2024-05-15 09:00:31.432558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.380 [2024-05-15 09:00:31.432606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.432620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.437153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.437192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.437206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.441962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.442001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.442015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.446903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.446939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.446953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.449695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.449729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.449742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.454860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.454897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.454922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.459149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.459188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.459202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.462455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.462493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.462506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.467002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.467041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.467056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.471687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.471725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.471739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.476500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.476537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.476552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.479546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.479597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.479611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.484033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.484080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.484095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.487519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.487558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.487587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.491634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.491672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.491686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.495662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 
[2024-05-15 09:00:31.495699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.495712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.500675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.500714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.500728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.504055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.504103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.504128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.508162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.508200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.508214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.512282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.512321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.512335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.515714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.515752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.515766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.519892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.519931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.519945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.523692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.523729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.523742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.527857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.527894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.527907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.531945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.531989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.532003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.536391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.536442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.536463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.540462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.540501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.540516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.381 [2024-05-15 09:00:31.544138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.381 [2024-05-15 09:00:31.544177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.381 [2024-05-15 09:00:31.544191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.548352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.548392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.548411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.552618] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.552655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.552669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.555995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.556033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.556047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.560191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.560229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.560243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.564257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.564295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.564309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.568006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.568044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.568058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.572303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.572341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.572355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.577124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.577164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.577178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:15.382 [2024-05-15 09:00:31.580921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.580957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.580971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.584647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.584685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.584698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.588094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.588136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.588150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.592796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.592850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.592865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.596645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.596697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.596711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.600926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.600980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.600995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.604630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.604671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.604685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.382 [2024-05-15 09:00:31.608893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.382 [2024-05-15 09:00:31.608933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.382 [2024-05-15 09:00:31.608947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.645 [2024-05-15 09:00:31.612897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.645 [2024-05-15 09:00:31.612935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.645 [2024-05-15 09:00:31.612949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.645 [2024-05-15 09:00:31.616937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.645 [2024-05-15 09:00:31.616976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.645 [2024-05-15 09:00:31.616989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.645 [2024-05-15 09:00:31.620975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.621013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.621027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.625148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.625185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.625199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.628486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.628523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.628537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.632352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.632391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.632405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.636862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.636901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.636916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.640787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.640825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.640839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.644370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.644409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.644423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.648382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.648422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.648435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.652777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.652817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.652831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.656652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.656690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.656704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.660940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.660979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:15.646 [2024-05-15 09:00:31.660992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.664978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.665016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.665030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.668861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.668900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.668913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.672833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.672872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.672885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.676758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.676796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.676809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.681423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.681463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.681477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.684924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.684962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.684977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.689805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.689842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.689855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.695086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.695127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.695141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.700144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.700185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.700199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.703153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.703189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.703203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.708106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.708156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.708170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.713354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.713404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.713418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.716286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.716323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.716337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.646 [2024-05-15 09:00:31.721031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.646 [2024-05-15 09:00:31.721079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.646 [2024-05-15 09:00:31.721093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.725358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.725392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.725406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.729310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.729346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.729360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.733672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.733708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.733721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.737710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.737746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.737760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.741197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.741234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.741248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.745777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.745813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.745827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.749487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 
[2024-05-15 09:00:31.749523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.749537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.753744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.753798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.757486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.757524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.757538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.761721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.761758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.761771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.766216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.766252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.766266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.769890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.769926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.769946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.773678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.773715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.773730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.777700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.777737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.777751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.782207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.782245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.782259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.785870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.785907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.785920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.790944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.790982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.790996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.795820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.795857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.795871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.800594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.800627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.800640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.803922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.803960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.803973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.809271] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.809308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.809322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.813439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.813477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.813491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.817340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.817379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.647 [2024-05-15 09:00:31.817393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.647 [2024-05-15 09:00:31.821459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.647 [2024-05-15 09:00:31.821509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.821525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.826900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.826939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.826953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.830647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.830687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.830701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.835241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.835279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.835293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
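Each pair of records above is one injected CRC-32C failure surfacing on the host side: nvme_tcp.c flags the data digest mismatch on the receive path, and the affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness does not count these by parsing the log; it reads the controller's NVMe error counters over the bperf RPC socket, as the xtrace further down shows. A minimal sketch of that query (same socket, bdev name and jq filter as in this run; it assumes --nvme-error-stat was passed to bdev_nvme_set_options so the counters are populated):

  # Ask bdevperf for per-bdev iostat and pull out the transient-transport-error count.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'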
00:20:15.648 [2024-05-15 09:00:31.840451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.840488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.840502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.843871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.843906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.843919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.848496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.848533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.848548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.853649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.853686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.853699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.856610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.856643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.856656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.861671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.861710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.861723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.866298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.866335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.866349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.869483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.869519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.869535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.648 [2024-05-15 09:00:31.873599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.648 [2024-05-15 09:00:31.873635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.648 [2024-05-15 09:00:31.873649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.931 [2024-05-15 09:00:31.878046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.931 [2024-05-15 09:00:31.878082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.931 [2024-05-15 09:00:31.878096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.931 [2024-05-15 09:00:31.881965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.931 [2024-05-15 09:00:31.882004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.931 [2024-05-15 09:00:31.882019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.931 [2024-05-15 09:00:31.886516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.931 [2024-05-15 09:00:31.886556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.931 [2024-05-15 09:00:31.886584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:15.931 [2024-05-15 09:00:31.889816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.932 [2024-05-15 09:00:31.889853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.932 [2024-05-15 09:00:31.889866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:15.932 [2024-05-15 09:00:31.894519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.932 [2024-05-15 09:00:31.894558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.932 [2024-05-15 09:00:31.894587] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:15.932 [2024-05-15 09:00:31.898542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef98b0) 00:20:15.932 [2024-05-15 09:00:31.898591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.932 [2024-05-15 09:00:31.898606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:15.932 00:20:15.932 Latency(us) 00:20:15.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.932 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:15.932 nvme0n1 : 2.00 7380.04 922.51 0.00 0.00 2163.70 677.70 10426.18 00:20:15.932 =================================================================================================================== 00:20:15.932 Total : 7380.04 922.51 0.00 0.00 2163.70 677.70 10426.18 00:20:15.932 0 00:20:15.932 09:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:15.932 09:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:15.932 09:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:15.932 09:00:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:15.932 | .driver_specific 00:20:15.932 | .nvme_error 00:20:15.932 | .status_code 00:20:15.932 | .command_transient_transport_error' 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 476 > 0 )) 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87454 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87454 ']' 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87454 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87454 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:16.190 killing process with pid 87454 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87454' 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87454 00:20:16.190 Received shutdown signal, test time was about 2.000000 seconds 00:20:16.190 00:20:16.190 Latency(us) 00:20:16.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.190 =================================================================================================================== 00:20:16.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@970 -- # wait 87454 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87531 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87531 /var/tmp/bperf.sock 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87531 ']' 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:16.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.190 09:00:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:16.448 [2024-05-15 09:00:32.470242] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
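run_bperf_err now repeats the experiment as 4 KiB random writes at queue depth 128. A fresh bdevperf (pid 87531) is started idle and left waiting for RPCs before any I/O is issued; a condensed sketch of that launch, with the binary path and flags copied from the xtrace above:

  # -m 2: run on core 1 only; -z: stay idle until perform_tests is sent over
  # the RPC socket given with -r, so the test can configure error injection first.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!   # the harness then waits for /var/tmp/bperf.sock to start listening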
00:20:16.448 [2024-05-15 09:00:32.470336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87531 ] 00:20:16.448 [2024-05-15 09:00:32.605864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.448 [2024-05-15 09:00:32.666838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.392 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.392 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:17.392 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:17.392 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:17.650 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:17.650 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.650 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.650 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:17.650 09:00:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:17.908 nvme0n1 00:20:17.908 09:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:17.908 09:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.908 09:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:17.908 09:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.908 09:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:17.908 09:00:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:18.167 Running I/O for 2 seconds... 
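The xtrace above is the whole setup for this write pass, spread across several RPC calls. Collected in one place as a sketch (commands, addresses and sockets exactly as in this log; rpc_cmd is shown above without an explicit socket and normally targets the main application's default RPC socket, the NVMe-oF target in this job, while the bperf_rpc calls go to /var/tmp/bperf.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Keep per-controller NVMe error counters and retry failed I/O indefinitely,
  # so injected digest errors are counted but never become application-visible failures.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale crc32c injection (issued via rpc_cmd in the harness; default
  # target socket assumed here), then attach the target with data digest enabled.
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Start corrupting crc32c results (-t corrupt, interval -i 256) and run the
  # 2-second workload; each mismatch below is counted as a transient transport error.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests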
00:20:18.167 [2024-05-15 09:00:34.172790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f6458 00:20:18.167 [2024-05-15 09:00:34.173900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.167 [2024-05-15 09:00:34.173936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:18.167 [2024-05-15 09:00:34.185253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f4f40 00:20:18.167 [2024-05-15 09:00:34.186342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.167 [2024-05-15 09:00:34.186381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:18.167 [2024-05-15 09:00:34.196685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ef6a8 00:20:18.167 [2024-05-15 09:00:34.197619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.167 [2024-05-15 09:00:34.197653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:18.167 [2024-05-15 09:00:34.211313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5220 00:20:18.167 [2024-05-15 09:00:34.213077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.167 [2024-05-15 09:00:34.213110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:18.167 [2024-05-15 09:00:34.219961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190feb58 00:20:18.167 [2024-05-15 09:00:34.220767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.167 [2024-05-15 09:00:34.220799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:18.167 [2024-05-15 09:00:34.234417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f3a28 00:20:18.168 [2024-05-15 09:00:34.235891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.235924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.245753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eff18 00:20:18.168 [2024-05-15 09:00:34.246912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.246947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.257614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eaab8 00:20:18.168 [2024-05-15 09:00:34.258802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.258845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.269931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f5be8 00:20:18.168 [2024-05-15 09:00:34.271082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.271115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.281429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f3a28 00:20:18.168 [2024-05-15 09:00:34.282446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.282480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.296291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1430 00:20:18.168 [2024-05-15 09:00:34.298149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.298183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.304967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190efae0 00:20:18.168 [2024-05-15 09:00:34.305832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.305868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.319588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ebb98 00:20:18.168 [2024-05-15 09:00:34.321159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.321196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.331200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f4298 00:20:18.168 [2024-05-15 09:00:34.332302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.332341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.343673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eb328 00:20:18.168 [2024-05-15 09:00:34.344971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.345008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.358273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ef270 00:20:18.168 [2024-05-15 09:00:34.360197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.360235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.366905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f57b0 00:20:18.168 [2024-05-15 09:00:34.367855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.367892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.379209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190dfdc0 00:20:18.168 [2024-05-15 09:00:34.380169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.380205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:18.168 [2024-05-15 09:00:34.390825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eb328 00:20:18.168 [2024-05-15 09:00:34.391617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.168 [2024-05-15 09:00:34.391653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.405167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ed920 00:20:18.428 [2024-05-15 09:00:34.406166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.406213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.416062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e1b48 00:20:18.428 [2024-05-15 09:00:34.417192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.417229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.430719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ed920 00:20:18.428 [2024-05-15 09:00:34.432559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.432620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.443054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5220 00:20:18.428 [2024-05-15 09:00:34.444878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.444927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.453088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e99d8 00:20:18.428 [2024-05-15 09:00:34.453952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.453989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.465450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fc128 00:20:18.428 [2024-05-15 09:00:34.466817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.466853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.480037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e2c28 00:20:18.428 [2024-05-15 09:00:34.482066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.482104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.488674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e1710 00:20:18.428 [2024-05-15 09:00:34.489518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.489556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.503857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ee5c8 00:20:18.428 [2024-05-15 09:00:34.505734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.505773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.512233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f0350 00:20:18.428 [2024-05-15 09:00:34.513105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.513141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.524527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ef6a8 00:20:18.428 [2024-05-15 09:00:34.525379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.525417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.538755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e0ea0 00:20:18.428 [2024-05-15 09:00:34.539759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.539808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.550153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e8d30 00:20:18.428 [2024-05-15 09:00:34.551058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.551095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.561612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f35f0 00:20:18.428 [2024-05-15 09:00:34.562300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.562337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.573529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190df550 00:20:18.428 [2024-05-15 09:00:34.574552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.574598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.584914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f3e60 00:20:18.428 [2024-05-15 09:00:34.585766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.585802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.599983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f9b30 00:20:18.428 [2024-05-15 09:00:34.601849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.601887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.611368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eaab8 00:20:18.428 [2024-05-15 09:00:34.613095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.613133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.621692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ebb98 00:20:18.428 [2024-05-15 09:00:34.623677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.623715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.632107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fda78 00:20:18.428 [2024-05-15 09:00:34.632970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.428 [2024-05-15 09:00:34.633005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:18.428 [2024-05-15 09:00:34.646614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f2d80 00:20:18.429 [2024-05-15 09:00:34.647981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.429 [2024-05-15 09:00:34.648019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:18.429 [2024-05-15 09:00:34.658002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5220 00:20:18.429 [2024-05-15 09:00:34.659197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.429 [2024-05-15 09:00:34.659232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.687 [2024-05-15 09:00:34.669448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e1b48 00:20:18.687 [2024-05-15 09:00:34.670513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.687 [2024-05-15 09:00:34.670551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:18.687 [2024-05-15 09:00:34.680841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f57b0 00:20:18.687 [2024-05-15 09:00:34.681726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.687 [2024-05-15 09:00:34.681762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:18.687 [2024-05-15 09:00:34.692245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e12d8 00:20:18.687 [2024-05-15 09:00:34.693002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.687 [2024-05-15 09:00:34.693040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:18.687 [2024-05-15 09:00:34.706472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f8a50 00:20:18.687 [2024-05-15 09:00:34.708035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.687 [2024-05-15 09:00:34.708080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:18.687 [2024-05-15 09:00:34.718682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5220 00:20:18.687 [2024-05-15 09:00:34.720235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.687 [2024-05-15 09:00:34.720273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:18.687 [2024-05-15 09:00:34.729595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ea248 00:20:18.688 [2024-05-15 09:00:34.730881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.730918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.741274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e73e0 00:20:18.688 [2024-05-15 09:00:34.742507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.742543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.755667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ddc00 00:20:18.688 [2024-05-15 09:00:34.757593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.757630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.764163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e95a0 00:20:18.688 [2024-05-15 09:00:34.764936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.764972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.777668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e4578 00:20:18.688 [2024-05-15 09:00:34.778639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.778674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.789827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fc128 00:20:18.688 [2024-05-15 09:00:34.791252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.791288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.801042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ddc00 00:20:18.688 [2024-05-15 09:00:34.802221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.802262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.812723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190edd58 00:20:18.688 [2024-05-15 09:00:34.813866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.813901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.826915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f8e88 00:20:18.688 [2024-05-15 09:00:34.828496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.828534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.836764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e9168 00:20:18.688 [2024-05-15 09:00:34.837594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 
09:00:34.837630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.848954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f0350 00:20:18.688 [2024-05-15 09:00:34.849798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.849834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.863078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e4578 00:20:18.688 [2024-05-15 09:00:34.864109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.864147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.874456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ef6a8 00:20:18.688 [2024-05-15 09:00:34.875370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.875408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.885246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fdeb0 00:20:18.688 [2024-05-15 09:00:34.886273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.886308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.899646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ef6a8 00:20:18.688 [2024-05-15 09:00:34.901346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.901383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:18.688 [2024-05-15 09:00:34.908180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190edd58 00:20:18.688 [2024-05-15 09:00:34.908914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.688 [2024-05-15 09:00:34.908949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.922551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5ec8 00:20:18.946 [2024-05-15 09:00:34.923978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 
[2024-05-15 09:00:34.924013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.934648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f4f40 00:20:18.946 [2024-05-15 09:00:34.935578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:34.935615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.946611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fcdd0 00:20:18.946 [2024-05-15 09:00:34.947866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:34.947902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.957963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e6738 00:20:18.946 [2024-05-15 09:00:34.959047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:34.959084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.969330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f92c0 00:20:18.946 [2024-05-15 09:00:34.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:34.970322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.983588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190dece0 00:20:18.946 [2024-05-15 09:00:34.985344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:34.985381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:34.992155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f81e0 00:20:18.946 [2024-05-15 09:00:34.992945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:34.992983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.006578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e9e10 00:20:18.946 [2024-05-15 09:00:35.008057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:18.946 [2024-05-15 09:00:35.008104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.017799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e27f0 00:20:18.946 [2024-05-15 09:00:35.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.019031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.029449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e6738 00:20:18.946 [2024-05-15 09:00:35.030625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.030659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.041470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fcdd0 00:20:18.946 [2024-05-15 09:00:35.042156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.056286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eaef0 00:20:18.946 [2024-05-15 09:00:35.058301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.058338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.064794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fc560 00:20:18.946 [2024-05-15 09:00:35.065671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.065706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.078497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f3e60 00:20:18.946 [2024-05-15 09:00:35.079585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.079622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.089289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e4140 00:20:18.946 [2024-05-15 09:00:35.090483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21370 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.090521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.101347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e1710 00:20:18.946 [2024-05-15 09:00:35.102069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.102108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.113050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eaab8 00:20:18.946 [2024-05-15 09:00:35.114126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.114163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.124749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190dfdc0 00:20:18.946 [2024-05-15 09:00:35.125825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.125861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.136933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fb048 00:20:18.946 [2024-05-15 09:00:35.137995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.138031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.150495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e27f0 00:20:18.946 [2024-05-15 09:00:35.152087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.152129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.161864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190edd58 00:20:18.946 [2024-05-15 09:00:35.163167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.163205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:18.946 [2024-05-15 09:00:35.173589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f0350 00:20:18.946 [2024-05-15 09:00:35.174868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5522 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.946 [2024-05-15 09:00:35.174904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.185785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f31b8 00:20:19.205 [2024-05-15 09:00:35.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.187090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.197647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ed920 00:20:19.205 [2024-05-15 09:00:35.198409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.198445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.209105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fb048 00:20:19.205 [2024-05-15 09:00:35.209769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.209806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.223239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ee5c8 00:20:19.205 [2024-05-15 09:00:35.224999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.225034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.231792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fa7d8 00:20:19.205 [2024-05-15 09:00:35.232584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.232629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.243961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e7c50 00:20:19.205 [2024-05-15 09:00:35.244749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.244785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.257628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eee38 00:20:19.205 [2024-05-15 09:00:35.259034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:10889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.259070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.269394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fa3a0 00:20:19.205 [2024-05-15 09:00:35.270677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.270710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.281397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e3d08 00:20:19.205 [2024-05-15 09:00:35.282199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.282234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.293085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5ec8 00:20:19.205 [2024-05-15 09:00:35.294264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.294300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.304847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f57b0 00:20:19.205 [2024-05-15 09:00:35.305985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.205 [2024-05-15 09:00:35.306019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:19.205 [2024-05-15 09:00:35.319883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e99d8 00:20:19.205 [2024-05-15 09:00:35.321773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.321811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.328583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fbcf0 00:20:19.206 [2024-05-15 09:00:35.329414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.329448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.342960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1868 00:20:19.206 [2024-05-15 09:00:35.344473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:5860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.344508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.354158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ebb98 00:20:19.206 [2024-05-15 09:00:35.355399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.355435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.365870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1868 00:20:19.206 [2024-05-15 09:00:35.367099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.367135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.378474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fe2e8 00:20:19.206 [2024-05-15 09:00:35.379877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.379912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.389750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fef90 00:20:19.206 [2024-05-15 09:00:35.390833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.390869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.401423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ebfd0 00:20:19.206 [2024-05-15 09:00:35.402510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.402545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.415826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f46d0 00:20:19.206 [2024-05-15 09:00:35.417592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.417629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:19.206 [2024-05-15 09:00:35.428011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e1710 00:20:19.206 [2024-05-15 09:00:35.429776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.206 [2024-05-15 09:00:35.429813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.437915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fe720 00:20:19.471 [2024-05-15 09:00:35.438742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.438778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.450153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190df550 00:20:19.471 [2024-05-15 09:00:35.451451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.451488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.462281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fbcf0 00:20:19.471 [2024-05-15 09:00:35.463107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.463144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.473814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e8d30 00:20:19.471 [2024-05-15 09:00:35.474499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.474536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.487514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1ca0 00:20:19.471 [2024-05-15 09:00:35.489024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.489059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.498905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e8d30 00:20:19.471 [2024-05-15 09:00:35.500228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.500264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.510975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e6fa8 00:20:19.471 [2024-05-15 09:00:35.511962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.511999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.522434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e12d8 00:20:19.471 [2024-05-15 09:00:35.523320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.523356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.533939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5220 00:20:19.471 [2024-05-15 09:00:35.534620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.534657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.547771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ff3c8 00:20:19.471 [2024-05-15 09:00:35.549278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.549314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.559214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eee38 00:20:19.471 [2024-05-15 09:00:35.560588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.560622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.570304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ed920 00:20:19.471 [2024-05-15 09:00:35.571524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.571559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.582026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190df118 00:20:19.471 [2024-05-15 09:00:35.583225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.583261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.594356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f9b30 00:20:19.471 [2024-05-15 09:00:35.595588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.595623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.608506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ea680 00:20:19.471 [2024-05-15 09:00:35.610364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.610402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.617076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ea680 00:20:19.471 [2024-05-15 09:00:35.617962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.617996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.631540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fbcf0 00:20:19.471 [2024-05-15 09:00:35.633129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.633168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.642812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e84c0 00:20:19.471 [2024-05-15 09:00:35.644133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.644173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.654791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f0350 00:20:19.471 [2024-05-15 09:00:35.656041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.656093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:19.471 [2024-05-15 09:00:35.666892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190dece0 00:20:19.471 [2024-05-15 09:00:35.667664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.471 [2024-05-15 09:00:35.667701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:19.472 [2024-05-15 09:00:35.678589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f46d0 00:20:19.472 [2024-05-15 
09:00:35.679744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.472 [2024-05-15 09:00:35.679781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:19.472 [2024-05-15 09:00:35.690364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e12d8 00:20:19.472 [2024-05-15 09:00:35.691344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.472 [2024-05-15 09:00:35.691392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.704742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f0bc0 00:20:19.730 [2024-05-15 09:00:35.706505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.706546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.713411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e7818 00:20:19.730 [2024-05-15 09:00:35.714226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.714267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.725606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ed0b0 00:20:19.730 [2024-05-15 09:00:35.726408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.726445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.739693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e84c0 00:20:19.730 [2024-05-15 09:00:35.741169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.741205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.750873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190df550 00:20:19.730 [2024-05-15 09:00:35.752058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.752105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.762589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f5378 
00:20:19.730 [2024-05-15 09:00:35.763756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.763792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.774791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1430 00:20:19.730 [2024-05-15 09:00:35.775950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.775985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.786271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e84c0 00:20:19.730 [2024-05-15 09:00:35.787292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.787328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.800480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f9b30 00:20:19.730 [2024-05-15 09:00:35.802244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.802280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.812376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eea00 00:20:19.730 [2024-05-15 09:00:35.813584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.824425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eb760 00:20:19.730 [2024-05-15 09:00:35.825642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.825676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.838180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5658 00:20:19.730 [2024-05-15 09:00:35.840041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.840091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.846801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with 
pdu=0x2000190fdeb0 00:20:19.730 [2024-05-15 09:00:35.847688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.847723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.859527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1868 00:20:19.730 [2024-05-15 09:00:35.860593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.860633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.874041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f0788 00:20:19.730 [2024-05-15 09:00:35.875742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.875781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.882607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eaef0 00:20:19.730 [2024-05-15 09:00:35.883334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.883371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.895142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f4f40 00:20:19.730 [2024-05-15 09:00:35.896039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.896085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.909589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f7970 00:20:19.730 [2024-05-15 09:00:35.911162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.911199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.921828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190edd58 00:20:19.730 [2024-05-15 09:00:35.923386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.923422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.933277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbf9e70) with pdu=0x2000190fda78 00:20:19.730 [2024-05-15 09:00:35.934711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.934746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.944684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e6738 00:20:19.730 [2024-05-15 09:00:35.945920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.945957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:19.730 [2024-05-15 09:00:35.956076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f2d80 00:20:19.730 [2024-05-15 09:00:35.957189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.730 [2024-05-15 09:00:35.957227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:35.970926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ef270 00:20:19.990 [2024-05-15 09:00:35.972868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:35.972909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:35.979422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f1868 00:20:19.990 [2024-05-15 09:00:35.980218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:35.980254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:35.992735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f8618 00:20:19.990 [2024-05-15 09:00:35.994000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:35.994037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:36.006795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eb328 00:20:19.990 [2024-05-15 09:00:36.008706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:36.008742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:36.015339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbf9e70) with pdu=0x2000190fb8b8 00:20:19.990 [2024-05-15 09:00:36.016316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:36.016353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:36.027992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e4578 00:20:19.990 [2024-05-15 09:00:36.029108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:36.029144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:36.040083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f5be8 00:20:19.990 [2024-05-15 09:00:36.040722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:36.040757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:36.054950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190fef90 00:20:19.990 [2024-05-15 09:00:36.056907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.990 [2024-05-15 09:00:36.056947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:19.990 [2024-05-15 09:00:36.063609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f2948 00:20:19.990 [2024-05-15 09:00:36.064584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.064621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.076132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190eb760 00:20:19.991 [2024-05-15 09:00:36.077278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.077314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.088225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190ea248 00:20:19.991 [2024-05-15 09:00:36.088893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.088931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.101979] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f2d80 00:20:19.991 [2024-05-15 09:00:36.103439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.103476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.112589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e23b8 00:20:19.991 [2024-05-15 09:00:36.114521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.114559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.123001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190e5220 00:20:19.991 [2024-05-15 09:00:36.123828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.123862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.137369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190f9b30 00:20:19.991 [2024-05-15 09:00:36.138891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.138927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:19.991 [2024-05-15 09:00:36.149542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf9e70) with pdu=0x2000190df988 00:20:19.991 [2024-05-15 09:00:36.151047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.991 [2024-05-15 09:00:36.151081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:19.991 00:20:19.991 Latency(us) 00:20:19.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.991 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:19.991 nvme0n1 : 2.01 21106.63 82.45 0.00 0.00 6054.42 2502.28 15609.48 00:20:19.991 =================================================================================================================== 00:20:19.991 Total : 21106.63 82.45 0.00 0.00 6054.42 2502.28 15609.48 00:20:19.991 0 00:20:19.991 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:19.991 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:19.991 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:19.991 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:19.991 | .driver_specific 00:20:19.991 | 
.nvme_error 00:20:19.991 | .status_code 00:20:19.991 | .command_transient_transport_error' 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87531 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87531 ']' 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87531 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87531 00:20:20.251 killing process with pid 87531 00:20:20.251 Received shutdown signal, test time was about 2.000000 seconds 00:20:20.251 00:20:20.251 Latency(us) 00:20:20.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.251 =================================================================================================================== 00:20:20.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87531' 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87531 00:20:20.251 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87531 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87619 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87619 /var/tmp/bperf.sock 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87619 ']' 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:20.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
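For the run that just completed, each corrupted payload surfaced above as a tcp.c data_crc32_calc_done *ERROR* followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and the (( 165 > 0 )) check simply confirms that the per-bdev NVMe error counters recorded at least one of them before bperf pid 87531 was killed. A minimal standalone sketch of that counter query, assuming the bperf RPC socket is still listening at /var/tmp/bperf.sock and the bdev is still named nvme0n1 (the errcount variable name is illustrative, not part of the test):

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) || exit 1   # fail if no transient transport errors were counted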
00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.510 09:00:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.510 [2024-05-15 09:00:36.694937] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:20.510 [2024-05-15 09:00:36.695330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:20:20.510 Zero copy mechanism will not be used. 00:20:20.510 =spdk_pid87619 ] 00:20:20.770 [2024-05-15 09:00:36.839354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.770 [2024-05-15 09:00:36.898706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.727 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:21.986 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.986 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:21.986 09:00:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.245 nvme0n1 00:20:22.245 09:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:22.245 09:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.245 09:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:22.245 09:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.245 09:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:22.245 09:00:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:22.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:22.503 Zero copy mechanism will not be used. 00:20:22.503 Running I/O for 2 seconds... 
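The second error run above is wired up entirely over the bperf RPC socket before perform_tests kicks off the two-second randwrite workload. A condensed sketch of that RPC sequence, with $rpc as illustrative shorthand for the scripts/rpc.py invocation the trace shows on every call (all arguments are copied verbatim from the trace; the inline comments are interpretation, not test output):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1      # keep per-bdev NVMe error counters
    $rpc accel_error_inject_error -o crc32c -t disable                      # no crc32c corruption while attaching
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # attach with data digest enabled
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32                # start corrupting crc32c results
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
         -s /var/tmp/bperf.sock perform_tests                               # run the queued randwrite job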
00:20:22.503 [2024-05-15 09:00:38.497964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.498308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.498348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.503316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.503644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.503684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.508630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.508939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.508978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.513938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.514241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.514279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.519413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.519744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.519784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.524759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.525058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.525096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.530035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.530347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.530384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.535313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.535629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.535667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.540500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.540820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.540859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.545760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.546063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.546113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.551002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.551305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.551342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.556331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.556642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.556680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.561588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.561888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.561926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.566856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.567153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.567192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.572131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.572430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.572468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.577359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.577673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.577710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.582646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.582949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.582988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.587903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.588210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.588256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.593171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.593467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.593507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.598422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.598737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.603787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.604109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.604148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.609061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.609358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.609397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.614352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.614663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.614701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.619621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.619916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.619954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.624885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.625181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.625218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.630133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.630430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.630464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.635381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.635691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.635724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.640618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.640915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 
[2024-05-15 09:00:38.640959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.646265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.646602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.646634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.652708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.653072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.653111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.660770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.661121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.661159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.667388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.667720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.667755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.672834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.673149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.673184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.678177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.678476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.678511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.683466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.683794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.683850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.688779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.689093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.689131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.694082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.694380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.503 [2024-05-15 09:00:38.694415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.503 [2024-05-15 09:00:38.699333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.503 [2024-05-15 09:00:38.699643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.699675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.504 [2024-05-15 09:00:38.704557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.504 [2024-05-15 09:00:38.704867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.704902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.504 [2024-05-15 09:00:38.709788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.504 [2024-05-15 09:00:38.710085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.710135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.504 [2024-05-15 09:00:38.715052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.504 [2024-05-15 09:00:38.715347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.715381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.504 [2024-05-15 09:00:38.720268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.504 [2024-05-15 09:00:38.720591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.720629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.504 [2024-05-15 09:00:38.725555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.504 [2024-05-15 09:00:38.725877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.725912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.504 [2024-05-15 09:00:38.730768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.504 [2024-05-15 09:00:38.731066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.504 [2024-05-15 09:00:38.731104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.736038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.736342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.736377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.741254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.741589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.741627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.746882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.747195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.747231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.752516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.752847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.752881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.758122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.758434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.758476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.763403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.763713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.763747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.769062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.769387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.769423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.775841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.776174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.776208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.781271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.781590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.781623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.786676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.786979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.787012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.792533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.792898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.792938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.798770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.799086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.799131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.804369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.804702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.804740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.809659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.809957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.809992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.814963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.815266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.815300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.820189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.820490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.820530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.825587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.825904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.825942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.831071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.831368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.831411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.836349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 
[2024-05-15 09:00:38.836664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.836699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.841686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.841985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.842020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.847212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.847526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.847572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.852464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.852773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.764 [2024-05-15 09:00:38.852808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.764 [2024-05-15 09:00:38.857713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.764 [2024-05-15 09:00:38.858027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.858069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.863172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.863470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.863502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.868362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.868677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.868716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.873634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) 
with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.873930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.873974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.878885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.879182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.879216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.884207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.884517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.884555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.889470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.889781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.889814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.894737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.895035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.895068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.899944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.900251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.900293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.905231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.905526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.905573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.910465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.910775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.910810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.915799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.916105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.916128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.921204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.921518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.921551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.926501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.926828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.926865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.931729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.932088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.937066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.937378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.937419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.942316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.942640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.942676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.947624] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.947938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.947970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.952857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.953154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.953186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.958147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.958464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.958501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.963400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.963708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.963752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.968656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.968951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.968985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.973951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.974249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.974288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.979472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.979809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.979846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:22.765 [2024-05-15 09:00:38.984831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.985128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.985160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.990020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.990315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.990348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.765 [2024-05-15 09:00:38.995239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:22.765 [2024-05-15 09:00:38.995535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.765 [2024-05-15 09:00:38.995577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.000483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.000828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.000862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.006003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.006320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.006354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.011215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.011531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.011590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.016522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.016837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.016873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.021762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.022060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.022093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.027026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.027339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.027377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.032305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.025 [2024-05-15 09:00:39.032633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.025 [2024-05-15 09:00:39.032670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.025 [2024-05-15 09:00:39.037507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.037816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.037850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.042722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.043019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.043061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.048005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.048353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.053285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.053617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.053660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.058551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.058869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.058909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.063845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.064154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.064188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.069088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.069383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.069422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.074300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.074624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.074665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.079557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.079873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.079908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.084868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.085165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.085203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.090051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.090348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.090386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.095340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.095667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.095704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.101007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.101309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.101344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.106624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.106940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.106975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.111803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.112111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.112146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.117368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.117698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.117738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.122684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.122981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.123012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.127902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.128215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 
[2024-05-15 09:00:39.128249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.133629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.133946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.133980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.138932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.139230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.139263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.144181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.144489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.144527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.149526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.149853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.149892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.154757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.155055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.155091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.160019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.160336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.160368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.165553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.165903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.165940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.170988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.171287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.171325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.176276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.026 [2024-05-15 09:00:39.176603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.026 [2024-05-15 09:00:39.176639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.026 [2024-05-15 09:00:39.181486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.181793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.181836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.186728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.187024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.187055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.191897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.192203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.192235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.197378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.197732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.197769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.202697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.202992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.203020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.207897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.208203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.208236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.213122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.213418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.213453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.218358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.218667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.218705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.223630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.223939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.223971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.229279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.229618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.229660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.234719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.235015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.235057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.240384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.240701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.240740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.245942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.246256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.246291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.251416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.251730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.251763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.027 [2024-05-15 09:00:39.256716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.027 [2024-05-15 09:00:39.257039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.027 [2024-05-15 09:00:39.257077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.262320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.262667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.262708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.267552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.267877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.267924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.272891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.273216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.273264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.278210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.278524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.278575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.283415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.283727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.283759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.288989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.289291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.289328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.294208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.294506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.294544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.299421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.299732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.299770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.304649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.304948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.304986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.309905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.286 [2024-05-15 09:00:39.310203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.310241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.286 [2024-05-15 09:00:39.315118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 
00:20:23.286 [2024-05-15 09:00:39.315415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.286 [2024-05-15 09:00:39.315453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.320700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.321000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.321038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.325916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.326213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.326252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.331151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.331449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.331485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.336373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.336684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.336722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.341601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.341915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.341954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.347114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.347449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.347491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.352445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.352755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.352794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.357619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.357932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.357969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.362925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.363223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.363260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.368168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.368467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.368504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.373415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.373763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.378941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.379238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.379277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.384196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.384525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.384574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.389462] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.389790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.389827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.394724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.395028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.395068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.399977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.400294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.400332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.405253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.405589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.405626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.410843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.411161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.411200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.416095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.416389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.416428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.421350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.421669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.421703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:23.287 [2024-05-15 09:00:39.426606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.426912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.426949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.431929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.432246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.432286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.437538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.437879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.437917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.442881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.443181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.443219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.448198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.448496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.448534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.287 [2024-05-15 09:00:39.453450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.287 [2024-05-15 09:00:39.453759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.287 [2024-05-15 09:00:39.453797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.458713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.459009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.459053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.463960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.464272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.464318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.469615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.469938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.469976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.474826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.475134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.475179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.480221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.480535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.480587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.485418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.485726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.485765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.491036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.491347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.491388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.496347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.496672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.496712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.501612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.501909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.501942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.506917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.507226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.507268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.512193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.512497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.512535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.288 [2024-05-15 09:00:39.517797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.288 [2024-05-15 09:00:39.518094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.288 [2024-05-15 09:00:39.518133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.523008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.523304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.523341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.528270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.528607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.528645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.533503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.533815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.533861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.539091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.539391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.539429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.544413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.544725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.544763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.549649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.549945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.549984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.554852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.555149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.555183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.560037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.560344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.560386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.565630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.565928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 [2024-05-15 09:00:39.565962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.570824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.571120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.547 
[2024-05-15 09:00:39.571158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.547 [2024-05-15 09:00:39.576029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.547 [2024-05-15 09:00:39.576345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.576378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.581245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.581545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.581588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.586865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.587178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.587212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.592124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.592421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.592455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.597318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.597643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.597675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.602505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.602815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.602847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.607737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.608040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.608089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.613213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.613528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.613574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.618402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.618712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.618746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.623649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.623945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.623977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.628860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.629155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.629191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.634330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.634658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.634691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.639539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.639849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.639887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.644733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.645030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.645069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.649943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.650258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.650290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.655174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.655472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.655504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.660761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.661059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.661098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.665941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.666276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.671192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.671489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.671521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.676390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.676701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.676743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.682086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.682404] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.682442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.687588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.687906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.687947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.692987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.693286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.693325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.698553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.698874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.698913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.704146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.704474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.704513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.709907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.710222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.710258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.715456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.715782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.715818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.548 [2024-05-15 09:00:39.721133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.548 [2024-05-15 09:00:39.721447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.548 [2024-05-15 09:00:39.721490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.726682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.726985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.727018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.732883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.733209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.733247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.738635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.738946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.738985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.744091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.744400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.744441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.749416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.749731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.749769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.755015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.755314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.755352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.760394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 
[2024-05-15 09:00:39.760705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.760738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.765672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.765985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.766025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.770994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.771293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.771332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.549 [2024-05-15 09:00:39.776189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.549 [2024-05-15 09:00:39.776487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.549 [2024-05-15 09:00:39.776526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.781798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.782099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.782137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.787078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.787385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.787423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.792322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.792635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.792672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.797530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) 
with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.797845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.797882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.803118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.803441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.803479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.808424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.808735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.808773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.813678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.813976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.814014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.818955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.819266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.819299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.824533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.824862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.824900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.829784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.830083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.830120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.835017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.835319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.835353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.840250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.840548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.840597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.845440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.845751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.808 [2024-05-15 09:00:39.845793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.808 [2024-05-15 09:00:39.850703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.808 [2024-05-15 09:00:39.850999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.851036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.855921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.856230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.856262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.861143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.861448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.861485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.866410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.866721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.866758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.871827] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.872140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.872183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.877075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.877379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.877417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.882331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.882658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.882695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.887622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.887916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.887953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.892835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.893129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.893166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.898077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.898375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.898413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.903278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.903602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.903638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.809 
[2024-05-15 09:00:39.908509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.908820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.908857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.913812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.914110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.914147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.919022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.919320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.919363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.924313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.924642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.924679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.929504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.929818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.929856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.934709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.935006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.935044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.939915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.940227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.940266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.945114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.945415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.945453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.950339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.950653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.950692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.955526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.955838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.955876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.960813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.961114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.961149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.966066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.966368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.966401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.971302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.971608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.971646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.809 [2024-05-15 09:00:39.976523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.809 [2024-05-15 09:00:39.976829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.809 [2024-05-15 09:00:39.976867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:39.981677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:39.981974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:39.982007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:39.986871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:39.987180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:39.987227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:39.992190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:39.992488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:39.992528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:39.997355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:39.997666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:39.997723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.002688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.002987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.003025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.007951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.008257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.008301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.013229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.013547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.013605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.018446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.018759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.018795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.024210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.024612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.024655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.031160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.031472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.031511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:23.810 [2024-05-15 09:00:40.036442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:23.810 [2024-05-15 09:00:40.036751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.810 [2024-05-15 09:00:40.036790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.041705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.042005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.042042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.046973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.047286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.047332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.052304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.052642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.052680] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.057546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.057870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.057912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.062835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.063132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.063171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.068050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.068357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.068395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.073296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.073630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.073668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.078629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.078943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.078977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.083821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.084147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.084182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.088996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.089299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:24.068 [2024-05-15 09:00:40.089331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.094243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.094558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.094603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.099473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.099779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.099823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.104813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.105111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.105144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.110041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.110341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.110367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.115304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.115630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.115670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.120550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.120868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.120908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.125842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.126141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.126185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.131109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.131405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.068 [2024-05-15 09:00:40.131444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.068 [2024-05-15 09:00:40.136321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.068 [2024-05-15 09:00:40.136631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.136665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.141528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.141842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.141880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.146769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.147096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.147134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.152028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.152358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.152396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.157243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.157545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.157595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.162465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.162776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.162815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.167809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.168125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.168167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.173117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.173411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.173446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.178372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.178693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.178733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.183704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.184007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.184044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.188975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.189279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.189315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.194242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.194574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.194610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.199457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.199765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.199797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.204688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.204984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.205022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.209916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.210211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.210251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.215186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.215482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.215519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.220748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.221052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.221088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.226301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.226626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.226662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.231601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.231906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.231952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.236819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 
[2024-05-15 09:00:40.237132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.237178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.242079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.242377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.242410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.247291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.247615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.247649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.252612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.252909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.252947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.257888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.258182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.258220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.263081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.263376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.263414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.268319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.268627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.268659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.273485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) 
with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.273797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.273836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.069 [2024-05-15 09:00:40.278662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.069 [2024-05-15 09:00:40.278957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.069 [2024-05-15 09:00:40.278999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.070 [2024-05-15 09:00:40.283846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.070 [2024-05-15 09:00:40.284153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.070 [2024-05-15 09:00:40.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.070 [2024-05-15 09:00:40.289044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.070 [2024-05-15 09:00:40.289337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.070 [2024-05-15 09:00:40.289379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.070 [2024-05-15 09:00:40.294270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.070 [2024-05-15 09:00:40.294589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.070 [2024-05-15 09:00:40.294624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.070 [2024-05-15 09:00:40.299474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.070 [2024-05-15 09:00:40.299790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.070 [2024-05-15 09:00:40.299824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.304721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.305025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.305064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.309980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.310277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.310320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.315265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.315598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.315631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.320438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.320748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.320780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.325638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.325935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.325968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.330828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.331124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.331158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.336027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.336334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.336373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.341305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.341630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.341668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.346534] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.346846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.346890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.351788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.352096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.352128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.357007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.357304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.357339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.362247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.362573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.367414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.367722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.367754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.372654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.372966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.373003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.377889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.378186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.378218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
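The run of records above repeats one pattern: tcp.c reports a data digest (DDGST) mismatch on a received PDU for qpair 0xa33370, and the affected 32-block WRITE is then completed with status (00/22), i.e. the "Transient Transport Error" status printed in the log, with dnr:0 so the command may be retried. As an illustration only (this is not SPDK's code; it assumes the digest is the standard CRC32C that NVMe/TCP uses for DDGST, and the payload below is made up), the following C sketch shows how such a digest is computed and how a single corrupted byte produces the mismatch that gets reported as "Data digest error":

/* crc32c_ddgst_sketch.c - illustrative only, not part of SPDK */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (slow, dependency-free) reflected CRC32C, polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Stand-in for the data portion of one of the WRITEs seen in the log. */
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t sent_ddgst = crc32c(payload, sizeof(payload));
    printf("digest carried in the PDU : 0x%08x\n", sent_ddgst);

    /* Corrupt one byte in flight; the receiver's recomputed digest differs. */
    payload[100] ^= 0x01;
    uint32_t recv_ddgst = crc32c(payload, sizeof(payload));
    printf("digest recomputed on recv : 0x%08x\n", recv_ddgst);
    printf("%s\n", sent_ddgst == recv_ddgst ? "digests match"
                                            : "data digest error (retryable)");
    return 0;
}

Built with any C compiler (e.g. cc crc32c_ddgst_sketch.c), the two printed digests differ, which is the condition the transport detects above before failing each command with the transient, retryable status rather than a hard error.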
00:20:24.357 [2024-05-15 09:00:40.383087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.383390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.383422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.388321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.388635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.388669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.393526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.393856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.393891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.398748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.399046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.399083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.403983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.404300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.404333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.409258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.357 [2024-05-15 09:00:40.409592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.357 [2024-05-15 09:00:40.409627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.357 [2024-05-15 09:00:40.414532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.414847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.414881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.419884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.420195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.420238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.425162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.425458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.425498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.430393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.430711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.430748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.435647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.435960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.435999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.440846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.441149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.441183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.446077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.446372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.446405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.451277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.451583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.451607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.456788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.457091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.457130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.462338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.462656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.462685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.467866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.468191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.468224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.473080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.473379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.473415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.478511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.478859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.478898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.483925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.484237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.484276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.358 [2024-05-15 09:00:40.489199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa33370) with pdu=0x2000190fef90 00:20:24.358 [2024-05-15 09:00:40.489498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.358 [2024-05-15 09:00:40.489537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.358 00:20:24.358 Latency(us) 00:20:24.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.358 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:24.358 nvme0n1 : 2.00 5805.30 725.66 0.00 0.00 2749.73 1474.56 7626.01 00:20:24.358 =================================================================================================================== 00:20:24.358 Total : 5805.30 725.66 0.00 0.00 2749.73 1474.56 7626.01 00:20:24.358 0 00:20:24.358 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:24.358 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:24.358 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:24.358 | .driver_specific 00:20:24.358 | .nvme_error 00:20:24.358 | .status_code 00:20:24.358 | .command_transient_transport_error' 00:20:24.358 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 374 > 0 )) 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87619 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87619 ']' 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87619 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87619 00:20:24.637 killing process with pid 87619 00:20:24.637 Received shutdown signal, test time was about 2.000000 seconds 00:20:24.637 00:20:24.637 Latency(us) 00:20:24.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.637 =================================================================================================================== 00:20:24.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87619' 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87619 00:20:24.637 09:00:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87619 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 87320 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87320 ']' 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87320 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # 
uname 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87320 00:20:24.896 killing process with pid 87320 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87320' 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87320 00:20:24.896 [2024-05-15 09:00:41.036321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:24.896 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87320 00:20:25.155 00:20:25.155 real 0m17.906s 00:20:25.155 user 0m34.667s 00:20:25.155 sys 0m4.281s 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:25.155 ************************************ 00:20:25.155 END TEST nvmf_digest_error 00:20:25.155 ************************************ 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.155 rmmod nvme_tcp 00:20:25.155 rmmod nvme_fabrics 00:20:25.155 rmmod nvme_keyring 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 87320 ']' 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 87320 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 87320 ']' 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 87320 00:20:25.155 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (87320) - No such process 00:20:25.155 Process with pid 87320 is not found 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 87320 is not found' 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.155 
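The (( 374 > 0 )) assertion above is fed by reading the controller's error counters back over the bperf RPC socket; outside the harness, roughly the same query can be issued directly (socket path, bdev name and jq path exactly as traced above, with the filter flattened onto one line):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
# prints the count of completions with the transient transport error status; 374 in this run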
09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.155 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.414 09:00:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:25.414 00:20:25.414 real 0m35.947s 00:20:25.414 user 1m8.833s 00:20:25.414 sys 0m8.946s 00:20:25.414 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:25.414 09:00:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:25.414 ************************************ 00:20:25.414 END TEST nvmf_digest 00:20:25.414 ************************************ 00:20:25.414 09:00:41 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:20:25.414 09:00:41 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:20:25.414 09:00:41 nvmf_tcp -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:25.414 09:00:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:25.414 09:00:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:25.414 09:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:25.414 ************************************ 00:20:25.414 START TEST nvmf_mdns_discovery 00:20:25.414 ************************************ 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:25.414 * Looking for test storage... 
00:20:25.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:20:25.414 
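The NVME_HOSTNQN/NVME_HOSTID pair captured above is regenerated on every run by nvme gen-hostnqn; a minimal sketch of that derivation (the parameter expansion used to strip the prefix is an assumption for illustration, not a copy of common.sh):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID portion for --hostid
echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"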
09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.414 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:25.415 Cannot find device "nvmf_tgt_br" 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.415 Cannot find device "nvmf_tgt_br2" 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:25.415 Cannot find device "nvmf_tgt_br" 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:25.415 Cannot find device "nvmf_tgt_br2" 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:25.415 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:25.674 09:00:41 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:25.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:25.674 00:20:25.674 --- 10.0.0.2 ping statistics --- 00:20:25.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.674 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:25.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:25.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:25.674 00:20:25.674 --- 10.0.0.3 ping statistics --- 00:20:25.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.674 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:25.674 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:25.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:20:25.932 00:20:25.932 --- 10.0.0.1 ping statistics --- 00:20:25.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.932 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=87915 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 87915 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 87915 ']' 
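The nvmf_veth_init sequence traced above builds a small bridged test network: nvmf_init_if (10.0.0.1/24) stays in the root namespace, the target's nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace, and the veth peer ends are enslaved to the nvmf_br bridge. Condensed into a standalone sketch (needs root, no error handling):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3        # same reachability checks as in the ping output above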
00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:25.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.932 09:00:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.932 [2024-05-15 09:00:41.989656] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:25.932 [2024-05-15 09:00:41.990427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.932 [2024-05-15 09:00:42.136225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.188 [2024-05-15 09:00:42.232648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.188 [2024-05-15 09:00:42.232711] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.188 [2024-05-15 09:00:42.232727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.188 [2024-05-15 09:00:42.232738] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.188 [2024-05-15 09:00:42.232747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
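Because the target above is launched with --wait-for-rpc, nothing is configured until the RPC sequence traced below runs; pulled together, the bring-up looks roughly like this (the target answers on rpc.py's default /var/tmp/spdk.sock socket, and the readiness loop stands in for the harness's waitforlisten):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for the RPC socket to come up
$RPC nvmf_set_config --discovery-filter=address
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
for b in null0 null1 null2 null3; do $RPC bdev_null_create "$b" 1000 512; done
$RPC bdev_wait_for_examine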
00:20:26.188 [2024-05-15 09:00:42.232775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 [2024-05-15 09:00:43.122063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 [2024-05-15 09:00:43.133967] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:27.118 [2024-05-15 09:00:43.134267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 null0 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd 
bdev_null_create null1 1000 512 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 null1 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 null2 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 null3 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # hostpid=87965 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # waitforlisten 87965 /tmp/host.sock 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 87965 ']' 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:27.118 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:27.118 09:00:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.118 [2024-05-15 09:00:43.250308] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:20:27.118 [2024-05-15 09:00:43.250412] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87965 ] 00:20:27.375 [2024-05-15 09:00:43.388032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.375 [2024-05-15 09:00:43.457665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # avahipid=87993 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # sleep 1 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:28.308 09:00:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:28.308 Process 999 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:28.308 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:28.308 Successfully dropped root privileges. 00:20:28.308 avahi-daemon 0.8 starting up. 00:20:28.308 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:28.308 Successfully called chroot(). 00:20:28.308 Successfully dropped remaining capabilities. 00:20:29.239 No service file found in /etc/avahi/services. 00:20:29.239 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:29.239 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:29.239 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:29.239 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:29.239 Network interface enumeration completed. 00:20:29.239 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:20:29.239 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:29.239 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:20:29.240 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:29.240 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 791395104. 
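The avahi-daemon instance started above is handed a minimal config through /dev/fd/63 so that it only serves mDNS on the two target-side interfaces, which is why it joins the multicast groups for 10.0.0.3 and 10.0.0.2 only; restated with process substitution, the launch is equivalent to:

ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &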
00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # notify_id=0 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.240 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
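On the host side (the second app listening on /tmp/host.sock) the test enables bdev_nvme logging and starts mDNS-driven discovery before any subsystems or listeners exist, which is why the get_subsystem_names / get_bdev_list probes around this point compare against empty strings. The equivalent standalone calls, with paths as traced:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /tmp/host.sock log_set_flag bdev_nvme
$RPC -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
$RPC -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # empty until discovery attaches a controller
$RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # likewise empty at this stage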
00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:20:29.497 [2024-05-15 09:00:45.681569] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:20:29.497 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:29.498 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.498 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:29.498 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.498 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:29.498 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.498 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 [2024-05-15 09:00:45.746811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 [2024-05-15 09:00:45.786829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.755 [2024-05-15 09:00:45.794747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # avahi_clientpid=88045 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:20:29.755 09:00:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:30.687 [2024-05-15 09:00:46.581582] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:30.688 Established under name 'CDC' 00:20:30.945 [2024-05-15 09:00:46.981595] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:20:30.945 [2024-05-15 09:00:46.981648] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:20:30.945 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:20:30.945 cookie is 0 00:20:30.945 is_local: 1 00:20:30.945 our_own: 0 00:20:30.945 wide_area: 0 00:20:30.945 multicast: 1 00:20:30.945 cached: 1 00:20:30.945 [2024-05-15 09:00:47.081578] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:20:30.945 [2024-05-15 09:00:47.081650] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:20:30.945 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:20:30.945 cookie is 0 00:20:30.945 is_local: 1 00:20:30.945 our_own: 0 00:20:30.945 wide_area: 0 00:20:30.945 multicast: 1 00:20:30.945 cached: 1 00:20:31.878 [2024-05-15 09:00:47.988806] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:31.878 [2024-05-15 09:00:47.988844] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:31.878 [2024-05-15 09:00:47.988864] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:31.878 [2024-05-15 09:00:48.074964] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:31.878 [2024-05-15 09:00:48.088524] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:31.878 [2024-05-15 09:00:48.088547] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:31.878 [2024-05-15 09:00:48.088582] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:32.137 [2024-05-15 09:00:48.135273] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:32.137 [2024-05-15 09:00:48.135313] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:32.137 [2024-05-15 09:00:48.177458] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:32.137 [2024-05-15 09:00:48.239006] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:32.137 [2024-05-15 09:00:48.239061] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.716 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 09:00:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=2 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=2 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 09:00:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=2 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 [2024-05-15 09:00:52.322114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.384 [2024-05-15 09:00:52.323160] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:36.384 [2024-05-15 09:00:52.323202] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:36.384 [2024-05-15 09:00:52.323241] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:36.384 [2024-05-15 09:00:52.323255] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.384 [2024-05-15 09:00:52.330039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:36.384 [2024-05-15 09:00:52.331153] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:36.384 [2024-05-15 09:00:52.331225] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.384 09:00:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:36.384 [2024-05-15 09:00:52.462258] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:36.384 [2024-05-15 09:00:52.462517] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:36.384 [2024-05-15 09:00:52.521619] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:36.384 [2024-05-15 09:00:52.521671] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:36.384 [2024-05-15 09:00:52.521680] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:36.384 [2024-05-15 09:00:52.521704] 
bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:36.384 [2024-05-15 09:00:52.521773] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:36.384 [2024-05-15 09:00:52.521785] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:36.384 [2024-05-15 09:00:52.521791] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:36.384 [2024-05-15 09:00:52.521807] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:36.384 [2024-05-15 09:00:52.567376] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:36.384 [2024-05-15 09:00:52.567424] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:36.384 [2024-05-15 09:00:52.567473] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:36.384 [2024-05-15 09:00:52.567483] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 
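The sequence above is the core of the mDNS discovery flow: each listener added on the target raises an asynchronous event on the discovery controllers, the host re-reads the discovery log page, and bdev_nvme attaches the new 4421 path alongside the existing 4420 one for both mdns0_nvme0 and mdns1_nvme0. The rpc_cmd calls in the trace go through the test harness wrapper around SPDK's scripts/rpc.py; outside the harness, a rough equivalent of the get_subsystem_paths check that follows would be (a sketch, assuming the host-side RPC socket /tmp/host.sock used throughout this run):

  # list the transport service IDs of every path attached to one mDNS-discovered controller
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected once both listeners are attached: 4420 4421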
00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.318 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=0 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 [2024-05-15 09:00:53.623009] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:37.578 [2024-05-15 09:00:53.623050] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:37.578 [2024-05-15 09:00:53.623089] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:37.578 [2024-05-15 09:00:53.623103] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.578 [2024-05-15 09:00:53.629305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.578 [2024-05-15 09:00:53.629351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.578 [2024-05-15 09:00:53.629367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.578 [2024-05-15 09:00:53.629381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.578 [2024-05-15 09:00:53.629397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.578 [2024-05-15 09:00:53.629410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.578 [2024-05-15 09:00:53.629425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.578 [2024-05-15 09:00:53.629440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.578 [2024-05-15 09:00:53.629450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.578 [2024-05-15 09:00:53.631010] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: 
Discovery[10.0.0.2:8009] got aer 00:20:37.578 [2024-05-15 09:00:53.631118] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:37.578 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.578 [2024-05-15 09:00:53.635281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.578 [2024-05-15 09:00:53.635315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.578 [2024-05-15 09:00:53.635329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.579 [2024-05-15 09:00:53.635339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.579 [2024-05-15 09:00:53.635349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.579 [2024-05-15 09:00:53.635359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.579 [2024-05-15 09:00:53.635369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.579 [2024-05-15 09:00:53.635378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.579 [2024-05-15 09:00:53.635388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 09:00:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:37.579 [2024-05-15 09:00:53.639245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.645242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.649268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.649437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.649470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.649492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.649523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.649542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.649552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.649600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
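Here the test tears the 4420 listeners back down. The ABORTED - SQ DELETION completions are the target cancelling the outstanding ASYNC EVENT REQUESTs as it deletes the queue pairs, and every subsequent reconnect to port 4420 fails with connect() errno 111 (ECONNREFUSED on Linux) because nothing is listening there any more. Outside the harness, the removals issued at mdns_discovery.sh@160 and @161 above would look roughly like this (a sketch, assuming the target uses the default RPC socket):

  # drop the original 4420 listeners; the 4421 listeners added earlier stay up
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420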
00:20:37.579 [2024-05-15 09:00:53.649631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.655257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.655400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.655426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.655439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.655458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.655474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.655483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.655494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.655516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.659354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.659469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.659493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.659505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.659522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.659538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.659547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.659590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.659609] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.665338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.665469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.665496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.665509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.665528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.665546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.665590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.665610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.665629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.669430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.669548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.669608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.669621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.669639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.669654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.669665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.669675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.669691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.675407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.675536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.675596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.675618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.675637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.675655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.675670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.675682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.675699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.679503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.679652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.679688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.679706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.679725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.679744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.679760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.679771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.679787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.685484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.685605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.685630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.685642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.685660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.685675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.685684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.685699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.685733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.689602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.689699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.689722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.689733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.689750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.689765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.689775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.689784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.689799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.695550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.695661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.695684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.695695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.695711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.695726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.695735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.695744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.695760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.699662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.699752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.699774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.699785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.699801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.699816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.699825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.699834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.699850] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.705627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.705727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.705750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.705761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.705778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.705793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.705802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.705812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.705827] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.709721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.709811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.709833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.709844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.709860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.709875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.709884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.709894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.709915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.715694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.715787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.715809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.715820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.715836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.715851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.715860] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.715870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.715886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.719781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.719883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.719907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.719918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.719935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.719950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.719959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.719969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.719984] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.725755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.725897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.725922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.725934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.725951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.725967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.725976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.725986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.726002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.729849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.729946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.729969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.729981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.729997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.730013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.730022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.730032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.579 [2024-05-15 09:00:53.730048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.579 [2024-05-15 09:00:53.735854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.579 [2024-05-15 09:00:53.735951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.735974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.579 [2024-05-15 09:00:53.735985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.579 [2024-05-15 09:00:53.736001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.579 [2024-05-15 09:00:53.736016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.579 [2024-05-15 09:00:53.736025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.579 [2024-05-15 09:00:53.736034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.579 [2024-05-15 09:00:53.736050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.579 [2024-05-15 09:00:53.739909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.579 [2024-05-15 09:00:53.739998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.579 [2024-05-15 09:00:53.740019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.579 [2024-05-15 09:00:53.740030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.580 [2024-05-15 09:00:53.740047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.580 [2024-05-15 09:00:53.740061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.580 [2024-05-15 09:00:53.740083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.580 [2024-05-15 09:00:53.740093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.580 [2024-05-15 09:00:53.740109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.580 [2024-05-15 09:00:53.745917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.580 [2024-05-15 09:00:53.746013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.580 [2024-05-15 09:00:53.746035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.580 [2024-05-15 09:00:53.746047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.580 [2024-05-15 09:00:53.746063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.580 [2024-05-15 09:00:53.746078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.580 [2024-05-15 09:00:53.746086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.580 [2024-05-15 09:00:53.746096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.580 [2024-05-15 09:00:53.746111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.580 [2024-05-15 09:00:53.749966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.580 [2024-05-15 09:00:53.750058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.580 [2024-05-15 09:00:53.750080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.580 [2024-05-15 09:00:53.750091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.580 [2024-05-15 09:00:53.750107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.580 [2024-05-15 09:00:53.750121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.580 [2024-05-15 09:00:53.750131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.580 [2024-05-15 09:00:53.750140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.580 [2024-05-15 09:00:53.750155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.580 [2024-05-15 09:00:53.755980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:37.580 [2024-05-15 09:00:53.756116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.580 [2024-05-15 09:00:53.756139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee4e0 with addr=10.0.0.3, port=4420 00:20:37.580 [2024-05-15 09:00:53.756151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee4e0 is same with the state(5) to be set 00:20:37.580 [2024-05-15 09:00:53.756169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee4e0 (9): Bad file descriptor 00:20:37.580 [2024-05-15 09:00:53.756202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:37.580 [2024-05-15 09:00:53.756213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:37.580 [2024-05-15 09:00:53.756223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:37.580 [2024-05-15 09:00:53.756239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.580 [2024-05-15 09:00:53.760024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.580 [2024-05-15 09:00:53.760143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:37.580 [2024-05-15 09:00:53.760167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2bc40 with addr=10.0.0.2, port=4420 00:20:37.580 [2024-05-15 09:00:53.760178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bc40 is same with the state(5) to be set 00:20:37.580 [2024-05-15 09:00:53.760196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bc40 (9): Bad file descriptor 00:20:37.580 [2024-05-15 09:00:53.760228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.580 [2024-05-15 09:00:53.760239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.580 [2024-05-15 09:00:53.760249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.580 [2024-05-15 09:00:53.760265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
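The disconnect/reset cycle above repeats for both controllers until the refreshed discovery log page is processed and the stale 10.0.0.x:4420 paths are pruned, which is what the "not found" / "found again" messages below report; the test simply waits out this window with the sleep 1 at mdns_discovery.sh@162. A more explicit wait could be written as a small helper (hypothetical, not part of this test, reusing the same RPC call and jq filter seen in the trace):

  # hypothetical: poll until a given trsvcid no longer appears among a controller's paths
  wait_for_path_removal() {
    local ctrlr=$1 port=$2
    while scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | grep -qw "$port"; do
      sleep 0.5
    done
  }
  wait_for_path_removal mdns0_nvme0 4420
  wait_for_path_removal mdns1_nvme0 4420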
00:20:37.580 [2024-05-15 09:00:53.761246] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:37.580 [2024-05-15 09:00:53.761276] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:37.580 [2024-05-15 09:00:53.761314] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:37.580 [2024-05-15 09:00:53.762220] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:37.580 [2024-05-15 09:00:53.762242] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:37.580 [2024-05-15 09:00:53.762261] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:37.838 [2024-05-15 09:00:53.847339] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:37.838 [2024-05-15 09:00:53.848313] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:38.771 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=0 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.772 09:00:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:38.772 [2024-05-15 09:00:54.981840] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:40.190 09:00:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=4 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=8 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.190 [2024-05-15 09:00:56.183655] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:40.190 2024/05/15 09:00:56 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery 
method, err: Code=-17 Msg=File exists 00:20:40.190 request: 00:20:40.190 { 00:20:40.190 "method": "bdev_nvme_start_mdns_discovery", 00:20:40.190 "params": { 00:20:40.190 "name": "mdns", 00:20:40.190 "svcname": "_nvme-disc._http", 00:20:40.190 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:40.190 } 00:20:40.190 } 00:20:40.190 Got JSON-RPC error response 00:20:40.190 GoRPCClient: error on JSON-RPC call 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.190 09:00:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:40.449 [2024-05-15 09:00:56.572218] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:40.449 [2024-05-15 09:00:56.672210] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:40.708 [2024-05-15 09:00:56.772223] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:20:40.708 [2024-05-15 09:00:56.772269] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:20:40.708 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:20:40.708 cookie is 0 00:20:40.708 is_local: 1 00:20:40.708 our_own: 0 00:20:40.708 wide_area: 0 00:20:40.708 multicast: 1 00:20:40.708 cached: 1 00:20:40.708 [2024-05-15 09:00:56.872221] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:20:40.708 [2024-05-15 09:00:56.872261] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:20:40.708 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:20:40.708 cookie is 0 00:20:40.708 is_local: 1 00:20:40.708 our_own: 0 00:20:40.708 wide_area: 0 00:20:40.708 multicast: 1 00:20:40.708 cached: 1 00:20:41.643 [2024-05-15 09:00:57.780570] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:41.643 [2024-05-15 09:00:57.780608] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:41.643 [2024-05-15 09:00:57.780628] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:41.643 [2024-05-15 09:00:57.867737] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:41.900 [2024-05-15 09:00:57.880381] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:41.900 [2024-05-15 09:00:57.880417] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:41.900 [2024-05-15 09:00:57.880448] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:41.900 [2024-05-15 09:00:57.932556] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:41.900 [2024-05-15 09:00:57.932613] bdev_nvme.c:6745:discovery_remove_controllers: 
*INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:41.900 [2024-05-15 09:00:57.965740] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:41.900 [2024-05-15 09:00:58.025025] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:41.900 [2024-05-15 09:00:58.025070] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.183 [2024-05-15 09:01:01.378970] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:45.183 2024/05/15 09:01:01 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:45.183 request: 00:20:45.183 { 00:20:45.183 "method": "bdev_nvme_start_mdns_discovery", 00:20:45.183 "params": { 00:20:45.183 "name": "cdc", 00:20:45.183 "svcname": "_nvme-disc._tcp", 00:20:45.183 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:45.183 } 00:20:45.183 } 00:20:45.183 Got JSON-RPC error response 00:20:45.183 GoRPCClient: error on JSON-RPC call 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:20:45.183 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.184 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:20:45.184 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:20:45.184 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # kill 87965 00:20:45.441 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # wait 87965 00:20:45.441 [2024-05-15 09:01:01.572203] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # kill 88045 00:20:45.699 Got SIGTERM, quitting. 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # kill 87993 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:45.699 Got SIGTERM, quitting. 00:20:45.699 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:45.699 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:45.699 avahi-daemon 0.8 exiting. 
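Before the apps are killed, the @193 call above has already stopped the mDNS poller over RPC, and the earlier @174 check showed the discovery list coming back empty once the poller exits. A minimal replay of that teardown check, under the same assumptions as the sketch above (rpc.py, /tmp/host.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
  # once the avahi poller stops, this prints nothing
  "$rpc" -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'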
00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.699 rmmod nvme_tcp 00:20:45.699 rmmod nvme_fabrics 00:20:45.699 rmmod nvme_keyring 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 87915 ']' 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 87915 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 87915 ']' 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 87915 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87915 00:20:45.699 killing process with pid 87915 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87915' 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 87915 00:20:45.699 [2024-05-15 09:01:01.800211] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:45.699 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 87915 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.957 09:01:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.957 09:01:02 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:45.957 ************************************ 00:20:45.957 END TEST nvmf_mdns_discovery 00:20:45.957 ************************************ 00:20:45.957 00:20:45.957 real 0m20.571s 00:20:45.957 user 0m40.472s 00:20:45.957 sys 0m1.881s 00:20:45.957 09:01:02 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:20:45.957 09:01:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.957 09:01:02 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:20:45.957 09:01:02 nvmf_tcp -- nvmf/nvmf.sh@115 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.957 09:01:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:45.957 09:01:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:45.957 09:01:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:45.957 ************************************ 00:20:45.957 START TEST nvmf_host_multipath 00:20:45.957 ************************************ 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.957 * Looking for test storage... 00:20:45.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.957 09:01:02 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.958 09:01:02 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.958 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:20:46.216 Cannot find device "nvmf_tgt_br" 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.216 Cannot find device "nvmf_tgt_br2" 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.216 Cannot find device "nvmf_tgt_br" 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:46.216 Cannot find device "nvmf_tgt_br2" 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.216 
09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.216 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:20:46.474 00:20:46.474 --- 10.0.0.2 ping statistics --- 00:20:46.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.474 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:46.474 00:20:46.474 --- 10.0.0.3 ping statistics --- 00:20:46.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.474 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:46.474 00:20:46.474 --- 10.0.0.1 ping statistics --- 00:20:46.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.474 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=88552 00:20:46.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 88552 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 88552 ']' 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.474 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.474 [2024-05-15 09:01:02.610275] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:20:46.474 [2024-05-15 09:01:02.610367] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.733 [2024-05-15 09:01:02.750911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.733 [2024-05-15 09:01:02.812101] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.733 [2024-05-15 09:01:02.812341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:46.733 [2024-05-15 09:01:02.812543] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.733 [2024-05-15 09:01:02.812702] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.733 [2024-05-15 09:01:02.812933] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.733 [2024-05-15 09:01:02.813090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.733 [2024-05-15 09:01:02.813095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=88552 00:20:46.733 09:01:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:46.990 [2024-05-15 09:01:03.145805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.990 09:01:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:47.247 Malloc0 00:20:47.247 09:01:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:47.812 09:01:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.812 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.069 [2024-05-15 09:01:04.248382] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:48.069 [2024-05-15 09:01:04.248680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.069 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:48.327 [2024-05-15 09:01:04.548767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:48.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
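The bdevperf attach sequence traced below sets up both portals as paths on a single controller; the same calls gathered in one place for reference, issued against the bdevperf RPC socket rather than the target's:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  # -x multipath makes the 4421 connection a second path on Nvme0 instead of a new controller
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10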
00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=88640 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 88640 /var/tmp/bdevperf.sock 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 88640 ']' 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:48.586 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:48.844 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:48.844 09:01:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:20:48.844 09:01:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:49.102 09:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:49.361 Nvme0n1 00:20:49.361 09:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:49.927 Nvme0n1 00:20:49.927 09:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.927 09:01:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:50.868 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:50.868 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:51.126 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:51.385 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:51.385 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88716 00:20:51.385 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:51.385 09:01:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:57.945 Attaching 4 probes... 00:20:57.945 @path[10.0.0.2, 4421]: 16443 00:20:57.945 @path[10.0.0.2, 4421]: 16731 00:20:57.945 @path[10.0.0.2, 4421]: 16388 00:20:57.945 @path[10.0.0.2, 4421]: 16867 00:20:57.945 @path[10.0.0.2, 4421]: 16861 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88716 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:57.945 09:01:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:57.945 09:01:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:58.203 09:01:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:58.203 09:01:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88853 00:20:58.203 09:01:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:58.203 09:01:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:04.758 Attaching 4 probes... 
00:21:04.758 @path[10.0.0.2, 4420]: 15293 00:21:04.758 @path[10.0.0.2, 4420]: 17019 00:21:04.758 @path[10.0.0.2, 4420]: 16562 00:21:04.758 @path[10.0.0.2, 4420]: 16719 00:21:04.758 @path[10.0.0.2, 4420]: 16742 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88853 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:04.758 09:01:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:05.016 09:01:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:05.016 09:01:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88984 00:21:05.016 09:01:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:05.016 09:01:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:11.590 Attaching 4 probes... 
00:21:11.590 @path[10.0.0.2, 4421]: 12711 00:21:11.590 @path[10.0.0.2, 4421]: 16813 00:21:11.590 @path[10.0.0.2, 4421]: 16653 00:21:11.590 @path[10.0.0.2, 4421]: 16758 00:21:11.590 @path[10.0.0.2, 4421]: 16786 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88984 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:11.590 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:11.848 09:01:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:12.105 09:01:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:12.105 09:01:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89119 00:21:12.105 09:01:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:12.105 09:01:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:18.661 Attaching 4 probes... 
00:21:18.661 00:21:18.661 00:21:18.661 00:21:18.661 00:21:18.661 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89119 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:18.661 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:18.919 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:18.919 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89249 00:21:18.919 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:18.919 09:01:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:25.627 09:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:25.627 09:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:25.627 Attaching 4 probes... 
00:21:25.627 @path[10.0.0.2, 4421]: 14823 00:21:25.627 @path[10.0.0.2, 4421]: 14612 00:21:25.627 @path[10.0.0.2, 4421]: 15851 00:21:25.627 @path[10.0.0.2, 4421]: 16099 00:21:25.627 @path[10.0.0.2, 4421]: 15862 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89249 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:25.627 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:25.627 [2024-05-15 09:01:41.457983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [2024-05-15 09:01:41.458147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 
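(Editor's note: the confirm_io_on_port checks traced throughout this run pair an RPC listener query with a parse of the bpftrace counters; below is a minimal standalone sketch of that logic in shell, assuming the same rpc.py, jq and awk/cut/sed pipeline shown in the trace. The trace.txt name, the expected_state/expected_port variables and the single compound comparison at the end are illustrative stand-ins, not the test's exact variables.)

  # which ANA state and port this check expects (illustrative values)
  expected_state=optimized
  expected_port=4421
  nqn=nqn.2016-06.io.spdk:cnode1
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # ask the target which listener currently carries the expected ANA state
  active_port=$($rpc nvmf_subsystem_get_listeners "$nqn" \
      | jq -r --arg s "$expected_state" \
          '.[] | select(.ana_states[0].ana_state==$s) | .address.trsvcid')

  # take the first port seen in the bpftrace counters, e.g. "@path[10.0.0.2, 4421]: 16443"
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

  # pass only if I/O was observed on the port the target reports for that state
  [[ "$port" == "$active_port" && "$port" == "$expected_port" ]]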
[2024-05-15 09:01:41.458155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.627 [... the same recv-state message repeats with successive timestamps up to 09:01:41.459089; identical lines collapsed ...] [2024-05-15 09:01:41.459098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is
same with the state(5) to be set 00:21:25.628 [2024-05-15 09:01:41.459107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.628 [2024-05-15 09:01:41.459115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.628 [2024-05-15 09:01:41.459124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8840 is same with the state(5) to be set 00:21:25.628 09:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:26.566 09:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:26.566 09:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89386 00:21:26.566 09:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:26.566 09:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.127 Attaching 4 probes... 00:21:33.127 @path[10.0.0.2, 4420]: 15795 00:21:33.127 @path[10.0.0.2, 4420]: 15641 00:21:33.127 @path[10.0.0.2, 4420]: 16592 00:21:33.127 @path[10.0.0.2, 4420]: 16304 00:21:33.127 @path[10.0.0.2, 4420]: 14411 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89386 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:33.127 [2024-05-15 09:01:48.969341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:33.127 09:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:33.127 09:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:39.681 09:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:39.681 
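(Editor's note: the steps above remove the 4421 listener, wait for the initiator to fail over, confirm I/O on 4420, then re-add 4421 and promote it before the final check that follows; this is a condensed sketch of that failover exercise, using the RPCs and sleep lengths visible in the trace, with confirm_io_on_port standing for the helper exercised above rather than a new definition.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # drop the currently optimized path; the host is expected to fail over to 4420
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  sleep 1
  confirm_io_on_port non_optimized 4420

  # bring 4421 back, promote it, and verify I/O returns to it
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 6
  confirm_io_on_port optimized 4421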
09:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89573 00:21:39.681 09:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88552 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:39.681 09:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.248 Attaching 4 probes... 00:21:46.248 @path[10.0.0.2, 4421]: 15691 00:21:46.248 @path[10.0.0.2, 4421]: 16147 00:21:46.248 @path[10.0.0.2, 4421]: 15417 00:21:46.248 @path[10.0.0.2, 4421]: 15049 00:21:46.248 @path[10.0.0.2, 4421]: 15516 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89573 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 88640 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 88640 ']' 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 88640 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88640 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:46.248 killing process with pid 88640 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88640' 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 88640 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 88640 00:21:46.248 Connection closed with partial response: 00:21:46.248 00:21:46.248 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 88640 00:21:46.248 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
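(Editor's note: the killprocess call traced above, from common/autotest_common.sh, guards the kill with a liveness check and a process-name check before waiting for the pid; the sketch below is a simplified illustration of that shape, not the library's exact code, and the sudo case is reduced to a bail-out. The bdevperf log dumped from try.txt follows.)

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
      if [ "$(uname)" = Linux ]; then
          # only signal the process we started; a sudo wrapper is handled specially upstream
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                      # reap it; tolerate a non-zero exit
  }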
00:21:46.248 [2024-05-15 09:01:04.621774] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:46.248 [2024-05-15 09:01:04.621985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88640 ] 00:21:46.248 [2024-05-15 09:01:04.761547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.248 [2024-05-15 09:01:04.832632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.248 Running I/O for 90 seconds... 00:21:46.248 [2024-05-15 09:01:14.365590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.365947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.248 
[2024-05-15 09:01:14.365983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.365998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.248 [2024-05-15 09:01:14.366545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.248 [2024-05-15 09:01:14.366591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.366961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.366977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.367607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.367667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.367705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.367745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.367782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.367818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.367854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.367890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.367926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.367964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.367986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 
09:01:14.368886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.368973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.368994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 
cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.369279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.369315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.369337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.369351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.370522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.370574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.370626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.370662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.370714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.249 [2024-05-15 09:01:14.370751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.370788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.370824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.370863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.370900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.370936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.370973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.370994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.371009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.371031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.371046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.371067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.371082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.371103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.371118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.371140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.371172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.249 [2024-05-15 09:01:14.371195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.249 [2024-05-15 09:01:14.371210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:14.371231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.250 [2024-05-15 09:01:14.371246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:14.371267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.250 [2024-05-15 09:01:14.371282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:14.371304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.250 [2024-05-15 09:01:14.371319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:14.372125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.250 [2024-05-15 09:01:14.372154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.933975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.933997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.934401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.934415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.935929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.935945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:21:46.250 [2024-05-15 09:01:20.936338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.936971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.936996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.250 [2024-05-15 09:01:20.937330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.250 [2024-05-15 09:01:20.937345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:46.251 [2024-05-15 09:01:20.937613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.937976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.937991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71488 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:20.938400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.938961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.938977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.939005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.939028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:21:46.251 [2024-05-15 09:01:20.939059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.939075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:20.939104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:20.939120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.065857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.065938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.065998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.066575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.066594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.067433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.251 [2024-05-15 09:01:28.067824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.251 [2024-05-15 09:01:28.067862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.251 [2024-05-15 09:01:28.067885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.251 [2024-05-15 09:01:28.067900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:21:46.251 [2024-05-15 09:01:28.067923 - 09:01:28.072422] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:87728..87768 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 nsid:1 lba:87928..88672 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0044..0027 p:0 m:0 dnr:0
00:21:46.253 [2024-05-15 09:01:41.458236 - 09:01:41.460122] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:94248..94608 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:004b..0078 p:0 m:0 dnr:0
00:21:46.253 [2024-05-15 09:01:41.460614 - 09:01:41.462190] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:93896..94240 and lba:94616..94664 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.254 [2024-05-15 09:01:41.462205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.254 [2024-05-15 09:01:41.462218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.254 [2024-05-15 09:01:41.462598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462837] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.462981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.462996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.463016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.463031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.463044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.463060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.463073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.463089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.463102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.463117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.254 [2024-05-15 09:01:41.463131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.465308] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2158310 was disconnected and freed. reset controller. 00:21:46.254 [2024-05-15 09:01:41.465444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.254 [2024-05-15 09:01:41.465484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.465501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.254 [2024-05-15 09:01:41.465515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.465529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.254 [2024-05-15 09:01:41.465542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.254 [2024-05-15 09:01:41.465556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.254 [2024-05-15 09:01:41.465589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.255 [2024-05-15 09:01:41.465606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.255 [2024-05-15 09:01:41.465619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.255 [2024-05-15 09:01:41.465640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212a9f0 is same with the state(5) to be set 00:21:46.255 [2024-05-15 09:01:41.467051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.255 [2024-05-15 09:01:41.467093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212a9f0 (9): Bad file descriptor 00:21:46.255 [2024-05-15 09:01:41.467269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.255 [2024-05-15 09:01:41.467301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212a9f0 with addr=10.0.0.2, port=4421 00:21:46.255 [2024-05-15 09:01:41.467318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212a9f0 is same with the state(5) to be set 00:21:46.255 [2024-05-15 09:01:41.467343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212a9f0 (9): Bad file descriptor 00:21:46.255 [2024-05-15 09:01:41.467365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.255 [2024-05-15 09:01:41.467379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.255 [2024-05-15 09:01:41.467393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
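The connect() failure to 10.0.0.2 port 4421 (errno 111, connection refused) and the reset cycle recorded next are the expected effect of the multipath script taking one path down and later bringing it back. As a rough illustration only, not a verbatim excerpt of multipath.sh, the listener toggle can be assembled from rpc.py calls that appear elsewhere in this log; the sleep is a placeholder, not the script's actual timing:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Take the second path down: connects to 10.0.0.2:4421 are refused (errno 111)
# and bdev_nvme keeps retrying the controller reset, as logged above.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10    # placeholder delay; the real script drives its own schedule
# Bring the path back: the next reconnect attempt succeeds and the log then
# reports "Resetting controller successful."
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421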
00:21:46.255 [2024-05-15 09:01:41.467418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.255 [2024-05-15 09:01:41.467431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.255 [2024-05-15 09:01:51.576858] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:46.255 Received shutdown signal, test time was about 55.571809 seconds 00:21:46.255 00:21:46.255 Latency(us) 00:21:46.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.255 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:46.255 Verification LBA range: start 0x0 length 0x4000 00:21:46.255 Nvme0n1 : 55.57 6935.65 27.09 0.00 0.00 18421.25 1534.14 7046430.72 00:21:46.255 =================================================================================================================== 00:21:46.255 Total : 6935.65 27.09 0.00 0.00 18421.25 1534.14 7046430.72 00:21:46.255 09:02:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.255 rmmod nvme_tcp 00:21:46.255 rmmod nvme_fabrics 00:21:46.255 rmmod nvme_keyring 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 88552 ']' 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 88552 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 88552 ']' 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 88552 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88552 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:46.255 killing process with pid 88552 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 
-- # echo 'killing process with pid 88552' 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 88552 00:21:46.255 [2024-05-15 09:02:02.308887] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:46.255 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 88552 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:46.514 00:21:46.514 real 1m0.474s 00:21:46.514 user 2m52.246s 00:21:46.514 sys 0m13.324s 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:46.514 09:02:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:46.514 ************************************ 00:21:46.514 END TEST nvmf_host_multipath 00:21:46.514 ************************************ 00:21:46.514 09:02:02 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:46.514 09:02:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:46.514 09:02:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:46.514 09:02:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.514 ************************************ 00:21:46.514 START TEST nvmf_timeout 00:21:46.514 ************************************ 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:46.514 * Looking for test storage... 
00:21:46.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.514 
09:02:02 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.514 09:02:02 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:46.514 Cannot find device "nvmf_tgt_br" 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.514 Cannot find device "nvmf_tgt_br2" 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:46.514 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:46.773 Cannot find device "nvmf_tgt_br" 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:46.773 Cannot find device "nvmf_tgt_br2" 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.773 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.773 09:02:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:47.106 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:47.106 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:47.106 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:47.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:47.106 00:21:47.106 --- 10.0.0.2 ping statistics --- 00:21:47.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.106 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:47.106 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:47.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:47.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:21:47.106 00:21:47.106 --- 10.0.0.3 ping statistics --- 00:21:47.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.106 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:47.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:21:47.107 00:21:47.107 --- 10.0.0.1 ping statistics --- 00:21:47.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.107 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=89901 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 89901 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89901 ']' 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:47.107 09:02:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:47.107 [2024-05-15 09:02:03.118153] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
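At this point nvmf_veth_init has finished building the virtual test network: the target's two addresses (10.0.0.2 and 10.0.0.3) sit on veth interfaces inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, and the peer ends are joined by the nvmf_br bridge, with iptables admitting TCP port 4420. Condensed from the commands logged above (the initial interface cleanup and the "Cannot find device" probes are omitted), the bring-up amounts to:

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, first path
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target, second path

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the initiator-side peers to the target-side peers.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic on the default port and allow bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, exactly as in the log: both target addresses must answer
# from the root namespace, and the initiator address from inside the namespace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1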
00:21:47.107 [2024-05-15 09:02:03.118258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.107 [2024-05-15 09:02:03.255229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:47.378 [2024-05-15 09:02:03.313430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.378 [2024-05-15 09:02:03.313671] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.378 [2024-05-15 09:02:03.313812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.378 [2024-05-15 09:02:03.313931] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.378 [2024-05-15 09:02:03.314068] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.378 [2024-05-15 09:02:03.314234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.378 [2024-05-15 09:02:03.314241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.945 09:02:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:48.205 [2024-05-15 09:02:04.375832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.205 09:02:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:48.464 Malloc0 00:21:48.464 09:02:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.031 09:02:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:49.031 09:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.290 [2024-05-15 09:02:05.424673] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:49.290 [2024-05-15 09:02:05.425400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=89992 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 89992 /var/tmp/bdevperf.sock 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89992 ']' 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:49.290 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.290 [2024-05-15 09:02:05.491219] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:49.290 [2024-05-15 09:02:05.491301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89992 ] 00:21:49.549 [2024-05-15 09:02:05.628750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.549 [2024-05-15 09:02:05.701325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.808 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:49.808 09:02:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:21:49.808 09:02:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:50.068 09:02:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:50.326 NVMe0n1 00:21:50.326 09:02:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=90021 00:21:50.326 09:02:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.326 09:02:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:50.326 Running I/O for 10 seconds... 
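Stripped of the harness wrappers, the target and initiator bring-up for this timeout test is the RPC sequence shown just above. Collected in one place as a sketch (the spdk/rpc shell variables are shorthand introduced here, not part of the script; commands and options are as logged):

spdk=/home/vagrant/spdk_repo/spdk
rpc=$spdk/scripts/rpc.py

# Target side, on the target's default RPC socket: TCP transport, a 64 MB
# malloc bdev with 512-byte blocks, one subsystem carrying that namespace,
# listening on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf on core 2 with its own RPC socket, queue depth 128,
# 4 KiB verify workload for 10 seconds.
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &

# Options issued exactly as in the trace; the 5 s controller-loss timeout and
# 2 s reconnect delay are the knobs this timeout test exercises.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the workload; the 10-second run reported below begins here.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &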
00:21:51.260 09:02:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.522 [2024-05-15 09:02:07.689371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [ ... the identical "recv state of tqpair=0x8cab50 is same with the state(5) to be set" error is logged repeatedly (timestamps 09:02:07.689680 through 09:02:07.689971) immediately after the port 4420 listener is removed; a few further instances continue below ... ]
00:21:51.522 [2024-05-15 09:02:07.689979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.689997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.690005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.690013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.690021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.690029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.690038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.522 [2024-05-15 09:02:07.690046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.523 [2024-05-15 09:02:07.690054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.523 [2024-05-15 09:02:07.690062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.523 [2024-05-15 09:02:07.690070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cab50 is same with the state(5) to be set 00:21:51.523 [2024-05-15 09:02:07.690252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 
[2024-05-15 09:02:07.690382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.690988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.690997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.691009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.691018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.691029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.691038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.691050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.691059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.691070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.691080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.691095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.691105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.523 [2024-05-15 09:02:07.691116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.523 [2024-05-15 09:02:07.691126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 
[2024-05-15 09:02:07.691262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80832 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.524 [2024-05-15 09:02:07.691969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.524 [2024-05-15 09:02:07.691980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.691989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:51.525 [2024-05-15 09:02:07.692138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.525 [2024-05-15 09:02:07.692855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.525 [2024-05-15 09:02:07.692864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.692884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.692904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.692924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.692945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.692965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.692985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.692996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.526 [2024-05-15 09:02:07.693005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.693016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8837b0 is same with the state(5) to be set 00:21:51.526 [2024-05-15 09:02:07.693029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:51.526 [2024-05-15 09:02:07.693037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:51.526 [2024-05-15 09:02:07.693046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81248 len:8 PRP1 0x0 PRP2 0x0 00:21:51.526 [2024-05-15 09:02:07.693055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.693098] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8837b0 was disconnected and freed. reset controller. 00:21:51.526 [2024-05-15 09:02:07.693184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.526 [2024-05-15 09:02:07.693200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.693210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.526 [2024-05-15 09:02:07.693220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.693229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.526 [2024-05-15 09:02:07.693238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.693250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.526 [2024-05-15 09:02:07.693260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.526 [2024-05-15 09:02:07.693269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x814a00 is same with the state(5) to be set 00:21:51.526 [2024-05-15 09:02:07.693493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:51.526 [2024-05-15 09:02:07.693514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814a00 (9): Bad file descriptor 00:21:51.526 [2024-05-15 09:02:07.693621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.526 [2024-05-15 09:02:07.693645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x814a00 with addr=10.0.0.2, port=4420 00:21:51.526 [2024-05-15 09:02:07.693655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x814a00 is same with the state(5) to be set 00:21:51.526 [2024-05-15 09:02:07.693673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814a00 (9): Bad file descriptor 00:21:51.526 [2024-05-15 09:02:07.693689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:51.526 [2024-05-15 09:02:07.693699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:51.526 [2024-05-15 09:02:07.693709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:51.526 [2024-05-15 09:02:07.693728] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.526 [2024-05-15 09:02:07.709868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:51.526 09:02:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:54.057 [2024-05-15 09:02:09.710057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.057 [2024-05-15 09:02:09.710134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x814a00 with addr=10.0.0.2, port=4420 00:21:54.057 [2024-05-15 09:02:09.710152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x814a00 is same with the state(5) to be set 00:21:54.057 [2024-05-15 09:02:09.710192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814a00 (9): Bad file descriptor 00:21:54.057 [2024-05-15 09:02:09.710212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:54.057 [2024-05-15 09:02:09.710222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:54.057 [2024-05-15 09:02:09.710232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:54.057 [2024-05-15 09:02:09.710260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
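The connect() failed, errno = 111 entries above are the expected consequence of the listener removal at host/timeout.sh@55: with nothing listening on 10.0.0.2:4420 every reconnect attempt is refused, so bdev_nvme keeps disconnecting, retrying and failing the reset while host/timeout.sh@56 sleeps for two seconds. A minimal sketch of how the same state can be observed from the shell during that window, using only the rpc.py calls that already appear in this trace (the socket path and script location are taken from the trace; the loop itself is illustrative, not part of the test):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the initiator-side controller list; it keeps reporting NVMe0 until
    # bdev_nvme gives up on reconnecting and deletes the controller.
    for _ in 1 2 3 4 5; do
        "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
        sleep 1
    done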
00:21:54.057 [2024-05-15 09:02:09.710272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.057 09:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:54.057 09:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:54.057 09:02:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:54.057 09:02:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:54.057 09:02:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:54.057 09:02:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:54.057 09:02:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:54.315 09:02:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:54.315 09:02:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:55.690 [2024-05-15 09:02:11.710425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.690 [2024-05-15 09:02:11.710500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x814a00 with addr=10.0.0.2, port=4420 00:21:55.690 [2024-05-15 09:02:11.710518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x814a00 is same with the state(5) to be set 00:21:55.690 [2024-05-15 09:02:11.710546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814a00 (9): Bad file descriptor 00:21:55.690 [2024-05-15 09:02:11.710593] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:55.690 [2024-05-15 09:02:11.710605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:55.690 [2024-05-15 09:02:11.710616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:55.690 [2024-05-15 09:02:11.710646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:55.690 [2024-05-15 09:02:11.710658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.592 [2024-05-15 09:02:13.710713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
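Two seconds after the listener disappeared the controller is still present: get_controller prints NVMe0 and get_bdev prints NVMe0n1, so the @57/@58 assertions pass and the test sleeps another five seconds so the remaining reconnect attempts can run out and the controller can be deleted. The two helpers are not shown in this excerpt; judging from the xtrace lines (host/timeout.sh@37 and @41) they are presumably thin wrappers of roughly this shape (function bodies reconstructed from the trace, not copied from timeout.sh):

    # Reconstruction based on the traced commands; $rootdir is the usual SPDK test
    # convention for the repository root and is an assumption here.
    get_controller() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }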
00:21:58.528 00:21:58.528 Latency(us) 00:21:58.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.528 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.528 Verification LBA range: start 0x0 length 0x4000 00:21:58.528 NVMe0n1 : 8.14 1231.64 4.81 15.72 0.00 102459.83 2487.39 7046430.72 00:21:58.528 =================================================================================================================== 00:21:58.528 Total : 1231.64 4.81 15.72 0.00 102459.83 2487.39 7046430.72 00:21:58.528 0 00:21:59.462 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:59.462 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.462 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:59.721 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:59.721 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:59.721 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:59.721 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 90021 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 89992 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89992 ']' 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89992 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:59.979 09:02:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89992 00:21:59.979 killing process with pid 89992 00:21:59.979 Received shutdown signal, test time was about 9.464038 seconds 00:21:59.979 00:21:59.979 Latency(us) 00:21:59.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.979 =================================================================================================================== 00:21:59.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.979 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:59.979 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:59.979 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89992' 00:21:59.979 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89992 00:21:59.979 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89992 00:21:59.979 09:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.238 [2024-05-15 09:02:16.420782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
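The first bdevperf summary above is internally consistent: at a 4096-byte I/O size, MiB/s equals IOPS * 4096 / 2^20, so the reported 1231.64 IOPS corresponds to the reported 4.81 MiB/s. A one-line check of that arithmetic (values copied from the table above):

    # 1231.64 IOPS * 4096 B per I/O / 1 MiB ~= 4.81 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 1231.64 * 4096 / 1048576 }'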
00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=90183 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 90183 /var/tmp/bdevperf.sock 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 90183 ']' 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:00.238 09:02:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.497 [2024-05-15 09:02:16.483629] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:22:00.497 [2024-05-15 09:02:16.483717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90183 ] 00:22:00.497 [2024-05-15 09:02:16.616586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.497 [2024-05-15 09:02:16.692837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.430 09:02:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:01.430 09:02:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:01.430 09:02:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:01.698 09:02:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:01.984 NVMe0n1 00:22:01.984 09:02:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=90232 00:22:01.984 09:02:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.984 09:02:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:01.984 Running I/O for 10 seconds... 
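The bdevperf setup traced above attaches the NVMe-oF TCP controller with explicit reconnect and timeout knobs, which is what the remainder of the test exercises once the listener is removed. A commented sketch of that attach sequence, with the command lines copied from the trace and the flag descriptions added here as a plain-language reading of the SPDK option names:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # set_options call copied verbatim from the trace above
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1
    # --ctrlr-loss-timeout-sec 5   : keep retrying for ~5 s before the controller is deleted
    # --fast-io-fail-timeout-sec 2 : start failing queued I/O after ~2 s while reconnects continue
    # --reconnect-delay-sec 1      : wait 1 s between reconnect attempts
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1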
00:22:02.919 09:02:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.488 [2024-05-15 09:02:19.466907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.466971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.466983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.466993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.488 [2024-05-15 09:02:19.467101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467318] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 
00:22:03.489 [2024-05-15 09:02:19.467499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac66a0 is same with the state(5) to be set 00:22:03.489 [2024-05-15 09:02:19.467866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.467898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.467921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.467932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.467944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.467954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.467966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.467975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.467987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.467996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.468007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.468017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.468029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.468038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.468050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.489 [2024-05-15 09:02:19.468059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.489 [2024-05-15 09:02:19.468070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88312 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:03.490 [2024-05-15 09:02:19.468632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.490 [2024-05-15 09:02:19.468927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.490 [2024-05-15 09:02:19.468959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.490 [2024-05-15 09:02:19.468968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.468979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.468988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.491 [2024-05-15 09:02:19.469305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 
[2024-05-15 09:02:19.469501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.491 [2024-05-15 09:02:19.469839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.491 [2024-05-15 09:02:19.469850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.469871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.469892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.469913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.469944] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.469964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.469985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.469994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 
09:02:19.470616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.492 [2024-05-15 09:02:19.470678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2a690 is same with the state(5) to be set 00:22:03.492 [2024-05-15 09:02:19.470700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.492 [2024-05-15 09:02:19.470708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.492 [2024-05-15 09:02:19.470716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88976 len:8 PRP1 0x0 PRP2 0x0 00:22:03.492 [2024-05-15 09:02:19.470728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.492 [2024-05-15 09:02:19.470772] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb2a690 was disconnected and freed. reset controller. 
00:22:03.493 [2024-05-15 09:02:19.470877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.493 [2024-05-15 09:02:19.470894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.493 [2024-05-15 09:02:19.470905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.493 [2024-05-15 09:02:19.470915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.493 [2024-05-15 09:02:19.470924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.493 [2024-05-15 09:02:19.470934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.493 [2024-05-15 09:02:19.470944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.493 [2024-05-15 09:02:19.470953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.493 [2024-05-15 09:02:19.470962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:03.493 [2024-05-15 09:02:19.471194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.493 [2024-05-15 09:02:19.471225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:03.493 [2024-05-15 09:02:19.471329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.493 [2024-05-15 09:02:19.471352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabba00 with addr=10.0.0.2, port=4420 00:22:03.493 [2024-05-15 09:02:19.471362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:03.493 [2024-05-15 09:02:19.471380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:03.493 [2024-05-15 09:02:19.471396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.493 [2024-05-15 09:02:19.471405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.493 [2024-05-15 09:02:19.471416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.493 [2024-05-15 09:02:19.471436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.493 [2024-05-15 09:02:19.487510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.493 09:02:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:04.428 [2024-05-15 09:02:20.487776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.428 [2024-05-15 09:02:20.487864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabba00 with addr=10.0.0.2, port=4420 00:22:04.428 [2024-05-15 09:02:20.487885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:04.428 [2024-05-15 09:02:20.487918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:04.428 [2024-05-15 09:02:20.487942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.428 [2024-05-15 09:02:20.487954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.428 [2024-05-15 09:02:20.487970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.428 [2024-05-15 09:02:20.488005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.428 [2024-05-15 09:02:20.488021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.428 09:02:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.686 [2024-05-15 09:02:20.896142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.943 09:02:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 90232 00:22:05.508 [2024-05-15 09:02:21.500672] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:12.067
00:22:12.067                                                                                 Latency(us)
00:22:12.067 Device Information                                            : runtime(s)     IOPS     MiB/s   Fail/s   TO/s    Average       min         max
00:22:12.067 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:12.067 Verification LBA range: start 0x0 length 0x4000
00:22:12.067 NVMe0n1                                                       :      10.00  5972.49    23.33     0.00   0.00   21384.59   1601.16  3019898.88
00:22:12.067 ===================================================================================================================
00:22:12.067 Total                                                         :             5972.49    23.33     0.00   0.00   21384.59   1601.16  3019898.88
00:22:12.067 0
00:22:12.067 09:02:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=90349 00:22:12.067 09:02:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.067 09:02:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:12.325 Running I/O for 10 seconds...
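(Illustrative sketch, not part of the captured output: the timeout test drives this recovery cycle by removing and re-adding the TCP listener around a bdevperf verify run, using the same commands that appear in the trace; the paths and the /var/tmp/bdevperf.sock socket are those used by this job.)
  # drop the listener so outstanding I/O to 10.0.0.2:4420 starts failing and the host begins controller resets
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # restore the listener so the host's reconnect/reset attempt can succeed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start another timed verify workload through the already-running bdevperf instance
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests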
00:22:13.258 09:02:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.518 [2024-05-15 09:02:29.501910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.501972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.501984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.501993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.502086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7780 is same with the state(5) to be set 00:22:13.518 [2024-05-15 09:02:29.503745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.503811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.503844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.503861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.503879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 
09:02:29.503895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.503913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.503928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.503945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.503960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.503977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.503993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.504011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.504025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.518 [2024-05-15 09:02:29.504042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.518 [2024-05-15 09:02:29.504057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.504974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.504990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.519 [2024-05-15 09:02:29.505394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.519 [2024-05-15 09:02:29.505411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505584] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.505980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.505998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.506014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.506046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.506078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.506109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.520 [2024-05-15 09:02:29.506140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 
[2024-05-15 09:02:29.506275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80272 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80296 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.520 [2024-05-15 09:02:29.506678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.520 [2024-05-15 09:02:29.506690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.520 [2024-05-15 09:02:29.506702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80304 len:8 PRP1 0x0 PRP2 0x0 00:22:13.520 [2024-05-15 09:02:29.506716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.506730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.506742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.506755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80312 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.506769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.506784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.506795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.506808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80320 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.506822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.506836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.506848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.506860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80328 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.506874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.506888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.506900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.506912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80336 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.506941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.506952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.506965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80344 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.506979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.506993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80352 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80360 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80368 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80376 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80384 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:13.521 [2024-05-15 09:02:29.507255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80392 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80400 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80408 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80416 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80424 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80432 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507601] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.521 [2024-05-15 09:02:29.507942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:22:13.521 [2024-05-15 09:02:29.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.521 [2024-05-15 09:02:29.507972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.521 [2024-05-15 09:02:29.507983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.507995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508265] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79528 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 
09:02:29.508931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.508960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.508972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.508984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.508998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.509012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.509024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.509036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.509050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.509064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.509076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.509088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.509102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.509117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.509128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.509141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.509155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.509169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.509180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.522 [2024-05-15 09:02:29.509195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:22:13.522 [2024-05-15 09:02:29.509209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.522 [2024-05-15 09:02:29.509224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.522 [2024-05-15 09:02:29.509235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.523 [2024-05-15 09:02:29.509248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:22:13.523 [2024-05-15 09:02:29.509262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.523 [2024-05-15 09:02:29.509320] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb3b390 was disconnected and freed. reset controller. 00:22:13.523 [2024-05-15 09:02:29.509466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.523 [2024-05-15 09:02:29.509502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.523 [2024-05-15 09:02:29.509521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.523 [2024-05-15 09:02:29.509536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.523 [2024-05-15 09:02:29.509552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.523 [2024-05-15 09:02:29.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.523 [2024-05-15 09:02:29.509605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.523 [2024-05-15 09:02:29.509620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.523 [2024-05-15 09:02:29.509634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:13.523 [2024-05-15 09:02:29.509901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.523 [2024-05-15 09:02:29.509941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:13.523 [2024-05-15 09:02:29.510085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.523 [2024-05-15 09:02:29.510124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabba00 with addr=10.0.0.2, port=4420 00:22:13.523 [2024-05-15 09:02:29.510142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:13.523 [2024-05-15 09:02:29.510169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:13.523 [2024-05-15 09:02:29.510193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:13.523 [2024-05-15 09:02:29.510209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:13.523 [2024-05-15 09:02:29.510224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:13.523 [2024-05-15 09:02:29.510253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:13.523 [2024-05-15 09:02:29.510271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.523 09:02:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:14.497 [2024-05-15 09:02:30.510428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.497 [2024-05-15 09:02:30.510509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabba00 with addr=10.0.0.2, port=4420 00:22:14.497 [2024-05-15 09:02:30.510527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:14.497 [2024-05-15 09:02:30.510555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:14.497 [2024-05-15 09:02:30.510587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.497 [2024-05-15 09:02:30.510599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:14.497 [2024-05-15 09:02:30.510610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.497 [2024-05-15 09:02:30.510639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:14.497 [2024-05-15 09:02:30.510651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:15.431 [2024-05-15 09:02:31.510802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.431 [2024-05-15 09:02:31.510881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabba00 with addr=10.0.0.2, port=4420 00:22:15.431 [2024-05-15 09:02:31.510899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:15.431 [2024-05-15 09:02:31.510926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:15.431 [2024-05-15 09:02:31.510945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:15.431 [2024-05-15 09:02:31.510956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:15.431 [2024-05-15 09:02:31.510967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:15.431 [2024-05-15 09:02:31.510995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:15.431 [2024-05-15 09:02:31.511007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.365 [2024-05-15 09:02:32.515434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.365 [2024-05-15 09:02:32.515543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabba00 with addr=10.0.0.2, port=4420 00:22:16.365 [2024-05-15 09:02:32.515599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabba00 is same with the state(5) to be set 00:22:16.365 [2024-05-15 09:02:32.515909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabba00 (9): Bad file descriptor 00:22:16.365 [2024-05-15 09:02:32.516251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.365 [2024-05-15 09:02:32.516294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.365 [2024-05-15 09:02:32.516315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.365 [2024-05-15 09:02:32.520962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:16.365 [2024-05-15 09:02:32.521030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.365 09:02:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.931 [2024-05-15 09:02:32.916001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.931 09:02:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 90349 00:22:17.496 [2024-05-15 09:02:33.557707] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
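The stretch above is the recovery half of the timeout test: with the target's listener removed, every reconnect attempt fails with errno 111 until host/timeout.sh re-adds the listener, after which the queued controller reset finally completes ("Resetting controller successful"). A minimal sketch of that recovery step, assuming the same NQN, address, and port shown in the trace and the bdevperf_pid variable the script itself uses:

# Re-advertise the TCP listener that the test removed earlier, so the
# initiator's pending reconnect attempts against 10.0.0.2:4420 can succeed.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Wait for the background bdevperf job to finish; once the controller reset
# completes, I/O resumes and the job exits normally with its latency summary.
wait "$bdevperf_pid"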
00:22:22.831 00:22:22.831 Latency(us) 00:22:22.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.831 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.831 Verification LBA range: start 0x0 length 0x4000 00:22:22.831 NVMe0n1 : 10.01 4922.13 19.23 3079.89 0.00 15959.97 655.36 3019898.88 00:22:22.831 =================================================================================================================== 00:22:22.831 Total : 4922.13 19.23 3079.89 0.00 15959.97 0.00 3019898.88 00:22:22.831 0 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 90183 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 90183 ']' 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 90183 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90183 00:22:22.831 killing process with pid 90183 00:22:22.831 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.831 00:22:22.831 Latency(us) 00:22:22.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.831 =================================================================================================================== 00:22:22.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90183' 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 90183 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 90183 00:22:22.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=90470 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 90470 /var/tmp/bdevperf.sock 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 90470 ']' 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:22.831 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.831 [2024-05-15 09:02:38.642147] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:22:22.831 [2024-05-15 09:02:38.642243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90470 ] 00:22:22.832 [2024-05-15 09:02:38.774692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.832 [2024-05-15 09:02:38.834849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.832 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:22.832 09:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:22.832 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=90479 00:22:22.832 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90470 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:22.832 09:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:23.090 09:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:23.348 NVMe0n1 00:22:23.348 09:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=90538 00:22:23.348 09:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.348 09:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:23.348 Running I/O for 10 seconds... 
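For the second run the test starts a fresh bdevperf in RPC-wait mode, applies bdev_nvme options, and re-attaches the controller with an explicit controller-loss timeout and reconnect delay before kicking off the workload. A minimal sketch of that setup, assuming the same socket path, flags, and NQN that appear in the trace (the socket-wait loop here is a simplification of the waitforlisten helper the test actually uses):

# Start bdevperf idle (-z) so it can be configured over its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!

# Wait until the bdevperf RPC socket exists before issuing RPCs.
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

# Apply the bdev_nvme options used by the test (flags as shown in the trace).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_options -r -1 -e 9

# Attach the controller with a 5 s controller-loss timeout and a 2 s
# reconnect delay; the attached namespace shows up as bdev NVMe0n1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Run the configured 10-second randread workload against NVMe0n1.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests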
00:22:24.283 09:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.544 [2024-05-15 09:02:40.732648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.732998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733048] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 
00:22:24.544 [2024-05-15 09:02:40.733228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.544 [2024-05-15 09:02:40.733252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac9b40 is same with the state(5) to be set 00:22:24.545 [2024-05-15 09:02:40.733605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.733980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.733991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 
[2024-05-15 09:02:40.734086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.545 [2024-05-15 09:02:40.734315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.545 [2024-05-15 09:02:40.734325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64232 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:24.546 [2024-05-15 09:02:40.734947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.734980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.734989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.735000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.735010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.735021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.735030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.735041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.735051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.735062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.735071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.546 [2024-05-15 09:02:40.735083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.546 [2024-05-15 09:02:40.735092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 
09:02:40.735153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.547 [2024-05-15 09:02:40.735952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.547 [2024-05-15 09:02:40.735964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.735973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.735985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.735994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 
[2024-05-15 09:02:40.736027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.548 [2024-05-15 09:02:40.736342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23287b0 is same with the state(5) to be set 00:22:24.548 [2024-05-15 09:02:40.736366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:24.548 [2024-05-15 09:02:40.736374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:24.548 [2024-05-15 09:02:40.736383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58976 len:8 PRP1 0x0 PRP2 0x0 00:22:24.548 [2024-05-15 09:02:40.736392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.548 [2024-05-15 09:02:40.736440] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23287b0 was disconnected and freed. reset controller. 
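The wall of *NOTICE* completions above is bdev_nvme draining qpair 0x23287b0 during the forced reset: every READ still outstanding on the I/O submission queue is completed manually with ABORTED - SQ DELETION (status 00/08) before the qpair is disconnected and freed. When triaging a flood like this offline, a small shell summary is usually enough; this is a sketch, not part of the test suite, and the log file name is an assumption — point it at whatever file holds the captured console output.

#!/usr/bin/env bash
# Sketch: summarize an "ABORTED - SQ DELETION" flood from a saved console log.
log=${1:-console.log}                      # assumed path; pass the real capture file
echo "aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$log")"
# Break the aborted READs down by submission queue id (sqid)
grep -o 'READ sqid:[0-9]*' "$log" | sort | uniq -c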
00:22:24.548 [2024-05-15 09:02:40.736728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.548 [2024-05-15 09:02:40.736829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9a00 (9): Bad file descriptor 00:22:24.548 [2024-05-15 09:02:40.736969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.548 [2024-05-15 09:02:40.736992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b9a00 with addr=10.0.0.2, port=4420 00:22:24.548 [2024-05-15 09:02:40.737003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9a00 is same with the state(5) to be set 00:22:24.548 [2024-05-15 09:02:40.737022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9a00 (9): Bad file descriptor 00:22:24.548 [2024-05-15 09:02:40.737048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.548 [2024-05-15 09:02:40.737066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:24.548 [2024-05-15 09:02:40.737083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.548 [2024-05-15 09:02:40.737112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:24.548 [2024-05-15 09:02:40.737124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.548 09:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 90538 00:22:27.076 [2024-05-15 09:02:42.737311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.076 [2024-05-15 09:02:42.737382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b9a00 with addr=10.0.0.2, port=4420 00:22:27.076 [2024-05-15 09:02:42.737401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9a00 is same with the state(5) to be set 00:22:27.076 [2024-05-15 09:02:42.737427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9a00 (9): Bad file descriptor 00:22:27.076 [2024-05-15 09:02:42.737461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.076 [2024-05-15 09:02:42.737474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:27.076 [2024-05-15 09:02:42.737485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.076 [2024-05-15 09:02:42.737513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
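From here the timeout test is exercising the reconnect path: the target side has been torn down, so each nvme_tcp_qpair_connect_sock attempt fails with errno 111 (ECONNREFUSED), nvme_ctrlr_process_init reports the controller in error state, and bdev_nvme schedules another reset roughly every two seconds here (visible in the 09:02:40 / 09:02:42 / 09:02:44 connect attempts around this point). That spacing is what later shows up as the reconnect entries in trace.txt; a quick way to check it from that trace is the sketch below — not part of the test itself, with the trace path copied from the log further down.

#!/usr/bin/env bash
# Sketch: print the gap between successive reconnect attempts recorded in trace.txt.
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
grep 'reconnect bdev controller NVMe0' "$trace" |
  awk -F: '{print $1}' |
  awk 'NR > 1 { printf "gap: %.3f\n", $1 - prev } { prev = $1 }'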
00:22:27.076 [2024-05-15 09:02:42.737524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:28.975 [2024-05-15 09:02:44.737783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.975 [2024-05-15 09:02:44.737860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b9a00 with addr=10.0.0.2, port=4420 00:22:28.975 [2024-05-15 09:02:44.737877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9a00 is same with the state(5) to be set 00:22:28.975 [2024-05-15 09:02:44.737905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9a00 (9): Bad file descriptor 00:22:28.975 [2024-05-15 09:02:44.737924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.975 [2024-05-15 09:02:44.737933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:28.975 [2024-05-15 09:02:44.737945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:28.975 [2024-05-15 09:02:44.737975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:28.975 [2024-05-15 09:02:44.737986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.879 [2024-05-15 09:02:46.738139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.816 00:22:31.816 Latency(us) 00:22:31.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.816 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:31.816 NVMe0n1 : 8.16 2395.94 9.36 15.68 0.00 52997.54 2546.97 7015926.69 00:22:31.817 =================================================================================================================== 00:22:31.817 Total : 2395.94 9.36 15.68 0.00 52997.54 2546.97 7015926.69 00:22:31.817 0 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.817 Attaching 5 probes... 
00:22:31.817 1276.875250: reset bdev controller NVMe0 00:22:31.817 1277.048507: reconnect bdev controller NVMe0 00:22:31.817 3277.313430: reconnect delay bdev controller NVMe0 00:22:31.817 3277.336723: reconnect bdev controller NVMe0 00:22:31.817 5277.782223: reconnect delay bdev controller NVMe0 00:22:31.817 5277.809323: reconnect bdev controller NVMe0 00:22:31.817 7278.235085: reconnect delay bdev controller NVMe0 00:22:31.817 7278.259273: reconnect bdev controller NVMe0 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 90479 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 90470 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 90470 ']' 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 90470 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90470 00:22:31.817 killing process with pid 90470 00:22:31.817 Received shutdown signal, test time was about 8.213018 seconds 00:22:31.817 00:22:31.817 Latency(us) 00:22:31.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.817 =================================================================================================================== 00:22:31.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90470' 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 90470 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 90470 00:22:31.817 09:02:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.075 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.423 rmmod nvme_tcp 00:22:32.423 rmmod nvme_fabrics 00:22:32.423 rmmod nvme_keyring 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 89901 ']' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 89901 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89901 ']' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89901 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89901 00:22:32.423 killing process with pid 89901 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89901' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89901 00:22:32.423 [2024-05-15 09:02:48.389300] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89901 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:32.423 ************************************ 00:22:32.423 END TEST nvmf_timeout 00:22:32.423 ************************************ 00:22:32.423 00:22:32.423 real 0m46.029s 00:22:32.423 user 2m15.871s 00:22:32.423 sys 0m4.894s 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:32.423 09:02:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:32.683 09:02:48 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:22:32.683 09:02:48 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:22:32.683 09:02:48 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.683 09:02:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.683 09:02:48 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:22:32.683 00:22:32.683 real 12m33.807s 00:22:32.683 user 34m46.811s 00:22:32.683 sys 2m52.321s 00:22:32.683 ************************************ 00:22:32.684 END TEST nvmf_tcp 00:22:32.684 ************************************ 00:22:32.684 09:02:48 nvmf_tcp -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:22:32.684 09:02:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.684 09:02:48 -- spdk/autotest.sh@12 -- # hostname 00:22:32.684 09:02:48 -- spdk/autotest.sh@12 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/nvmf_tcp.info 00:22:32.941 geninfo: WARNING: invalid characters removed from testname! 00:23:05.056 ### URING mentions in coverage after the test ###: 00:23:05.056 09:03:16 -- spdk/autotest.sh@13 -- # echo '### URING mentions in coverage after the test ###:' 00:23:05.056 09:03:16 -- spdk/autotest.sh@14 -- # grep -i uring 00:23:05.056 09:03:16 -- spdk/autotest.sh@14 -- # cat /home/vagrant/spdk_repo/spdk/../output/nvmf_tcp.info 00:23:05.056 09:03:17 -- spdk/autotest.sh@15 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_tcp.info 00:23:05.056 09:03:17 -- spdk/autotest.sh@297 -- # [[ 0 -eq 0 ]] 00:23:05.056 09:03:17 -- spdk/autotest.sh@298 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:05.056 09:03:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:05.056 09:03:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:05.056 09:03:17 -- common/autotest_common.sh@10 -- # set +x 00:23:05.056 ************************************ 00:23:05.056 START TEST spdkcli_nvmf_tcp 00:23:05.056 ************************************ 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:23:05.056 * Looking for test storage... 
00:23:05.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:23:05.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
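With the target up and listening on /var/tmp/spdk.sock, the spdkcli session that follows builds the whole test configuration: six malloc bdevs, a TCP transport, and three subsystems with namespaces, listeners and allowed hosts. For orientation, roughly the same objects can be created straight through SPDK's JSON-RPC client; the sketch below mirrors only the first subsystem, option spellings may differ between SPDK releases, and the serial number and port are simply the ones the test uses.

#!/usr/bin/env bash
# Sketch: rpc.py near-equivalent of the first few spdkcli commands below.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

$RPC bdev_malloc_create 32 512 -b Malloc3                        # 32 MiB bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp                                # the test also tunes io_unit_size etc.
$RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a -m 4
$RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260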
00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=91350 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 91350 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 91350 ']' 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.056 09:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.057 [2024-05-15 09:03:17.188268] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:23:05.057 [2024-05-15 09:03:17.188397] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91350 ] 00:23:05.057 [2024-05-15 09:03:17.328671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:05.057 [2024-05-15 09:03:17.390450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.057 [2024-05-15 09:03:17.390459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.057 09:03:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:05.057 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:05.057 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:23:05.057 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:05.057 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:05.057 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:23:05.057 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:05.057 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:05.057 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:05.057 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:05.057 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:05.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:05.057 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:05.057 ' 00:23:05.057 [2024-05-15 09:03:20.918226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.991 [2024-05-15 09:03:22.187068] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:05.991 [2024-05-15 09:03:22.187633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:23:08.518 [2024-05-15 09:03:24.589188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:23:11.046 [2024-05-15 09:03:26.678962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:23:12.420 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:12.420 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:12.420 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:12.420 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 
'Malloc4', True] 00:23:12.420 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:12.420 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:12.420 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:12.420 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:12.420 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:12.420 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:12.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:12.420 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:23:12.420 09:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:23:12.679 09:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.937 09:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:12.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:12.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:12.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:12.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:23:12.937 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:23:12.937 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:12.937 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:12.937 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:12.937 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:12.937 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:12.937 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:12.937 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:12.937 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:12.937 ' 00:23:18.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:18.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:18.202 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:18.202 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:18.202 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:18.202 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:18.202 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:18.202 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:18.202 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:18.202 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:18.202 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:18.202 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:18.202 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:18.202 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 91350 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 91350 ']' 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 91350 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91350 00:23:18.460 killing process with pid 91350 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91350' 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 91350 00:23:18.460 [2024-05-15 09:03:34.576826] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:18.460 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 91350 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 91350 ']' 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 91350 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 91350 ']' 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 91350 00:23:18.719 Process with pid 91350 is not found 00:23:18.719 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (91350) - No such process 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 91350 is not found' 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:18.719 ************************************ 00:23:18.719 END TEST spdkcli_nvmf_tcp 00:23:18.719 ************************************ 00:23:18.719 00:23:18.719 real 0m17.748s 00:23:18.719 user 0m38.498s 00:23:18.719 sys 0m0.918s 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.719 09:03:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.719 09:03:34 -- spdk/autotest.sh@299 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:18.719 09:03:34 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:18.719 09:03:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:18.719 09:03:34 -- common/autotest_common.sh@10 -- # set +x 00:23:18.719 ************************************ 00:23:18.719 START TEST nvmf_identify_passthru 00:23:18.719 ************************************ 00:23:18.719 09:03:34 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:18.719 * Looking for test storage... 00:23:18.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:18.719 09:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.719 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.719 09:03:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.719 09:03:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.719 
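For reference, the spdkcli_nvmf_tcp run that just finished drives spdkcli through spdkcli_job.py with the command strings quoted above. A trimmed-down sketch of the same create/inspect/clear cycle follows; the command strings are taken verbatim from the log, while the script path and the one-shot invocation style (passing a single command to spdkcli.py, as check_match does with "ll /nvmf") are assumptions.

#!/usr/bin/env bash
# Sketch of the spdkcli create/inspect/clear cycle exercised by spdkcli_nvmf_tcp.
# Assumes a running SPDK target and that scripts/spdkcli.py accepts one-shot commands.
set -euo pipefail
SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py

# Backing bdev and TCP transport.
$SPDKCLI /bdevs/malloc create 32 512 Malloc1
$SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192

# Subsystem with one namespace and one TCP listener.
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

# Inspect the tree (what check_match compares against the .match file), then tear down.
$SPDKCLI ll /nvmf
$SPDKCLI /nvmf/subsystem delete_all
$SPDKCLI /bdevs/malloc delete Malloc1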
09:03:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.719 09:03:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.719 09:03:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.719 09:03:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.719 09:03:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:18.720 09:03:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.720 09:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.720 09:03:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.720 09:03:34 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.720 09:03:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.720 09:03:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.720 09:03:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:18.720 09:03:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.720 09:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.720 09:03:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:18.720 09:03:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:18.720 Cannot find device "nvmf_tgt_br" 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:18.720 Cannot find device "nvmf_tgt_br2" 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:18.720 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:18.978 Cannot find device "nvmf_tgt_br" 00:23:18.978 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:23:18.978 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:18.978 Cannot find device "nvmf_tgt_br2" 00:23:18.978 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:23:18.978 09:03:34 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:18.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:18.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:18.978 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:18.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:23:18.979 00:23:18.979 --- 10.0.0.2 ping statistics --- 00:23:18.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.979 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:18.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:18.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:18.979 00:23:18.979 --- 10.0.0.3 ping statistics --- 00:23:18.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.979 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:18.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:23:18.979 00:23:18.979 --- 10.0.0.1 ping statistics --- 00:23:18.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.979 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.979 09:03:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.236 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.236 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:19.236 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:23:19.236 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:23:19.236 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:23:19.236 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:19.236 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:19.237 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:19.237 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
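The nvme_serial_number=12340 result above comes from two small steps: gen_nvme.sh lists the local NVMe controllers as a bdev config, and spdk_nvme_identify reads the first controller over PCIe. A sketch of the same flow, using the paths and field positions shown in the log:

#!/usr/bin/env bash
# Sketch: find the first local NVMe controller and read its serial number,
# mirroring get_first_nvme_bdf and the identify calls above.
set -euo pipefail
rootdir=/home/vagrant/spdk_repo/spdk   # path taken from the log

# gen_nvme.sh emits an attach-controller config; jq pulls out the PCI addresses.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}

# Identify over PCIe and take the third field of the "Serial Number:" line,
# exactly as the grep / awk '{print $3}' pair does above.
serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Serial Number:' | awk '{print $3}')
echo "first NVMe bdf=$bdf serial=$serial"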
00:23:19.237 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:19.237 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:23:19.237 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=91850 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.493 09:03:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 91850 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 91850 ']' 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.493 09:03:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.750 [2024-05-15 09:03:35.749378] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:23:19.750 [2024-05-15 09:03:35.749496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.750 [2024-05-15 09:03:35.894977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.750 [2024-05-15 09:03:35.962446] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.750 [2024-05-15 09:03:35.962499] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.750 [2024-05-15 09:03:35.962510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.750 [2024-05-15 09:03:35.962518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:19.750 [2024-05-15 09:03:35.962526] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.750 [2024-05-15 09:03:35.962621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.750 [2024-05-15 09:03:35.963259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.750 [2024-05-15 09:03:35.963307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.750 [2024-05-15 09:03:35.963314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:23:20.008 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.008 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.008 [2024-05-15 09:03:36.184066] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.008 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.008 [2024-05-15 09:03:36.195324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.008 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.008 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 Nvme0n1 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 [2024-05-15 09:03:36.322997] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:20.266 [2024-05-15 09:03:36.323473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 [ 00:23:20.266 { 00:23:20.266 "allow_any_host": true, 00:23:20.266 "hosts": [], 00:23:20.266 "listen_addresses": [], 00:23:20.266 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:20.266 "subtype": "Discovery" 00:23:20.266 }, 00:23:20.266 { 00:23:20.266 "allow_any_host": true, 00:23:20.266 "hosts": [], 00:23:20.266 "listen_addresses": [ 00:23:20.266 { 00:23:20.266 "adrfam": "IPv4", 00:23:20.266 "traddr": "10.0.0.2", 00:23:20.266 "trsvcid": "4420", 00:23:20.266 "trtype": "TCP" 00:23:20.266 } 00:23:20.266 ], 00:23:20.266 "max_cntlid": 65519, 00:23:20.266 "max_namespaces": 1, 00:23:20.266 "min_cntlid": 1, 00:23:20.266 "model_number": "SPDK bdev Controller", 00:23:20.266 "namespaces": [ 00:23:20.266 { 00:23:20.266 "bdev_name": "Nvme0n1", 00:23:20.266 "name": "Nvme0n1", 00:23:20.266 "nguid": "2631DCB88B514649A810F840971C943B", 00:23:20.266 "nsid": 1, 00:23:20.266 "uuid": "2631dcb8-8b51-4649-a810-f840971c943b" 00:23:20.266 } 00:23:20.266 ], 00:23:20.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.266 "serial_number": "SPDK00000000000001", 00:23:20.266 "subtype": "NVMe" 00:23:20.266 } 00:23:20.266 ] 00:23:20.266 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:20.266 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:20.523 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:23:20.523 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:20.523 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 
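The rpc_cmd calls above are the whole passthru bring-up: enable the custom identify handler before framework init, attach the local controller as a bdev, and export it over NVMe/TCP. A sketch of the same sequence follows; rpc_cmd is a test-suite wrapper, and mapping it onto scripts/rpc.py against the default /var/tmp/spdk.sock socket is an assumption, while the flags themselves are copied from the log.

#!/usr/bin/env bash
# Sketch of the nvmf_identify_passthru target bring-up, assuming the nvmf_tgt
# was started with --wait-for-rpc as shown above.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
$rpc framework_start_init

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems   # should list cnode1 with the Nvme0n1 namespace

# Read the identity back through the TCP target; the test asserts this matches
# the serial (12340) and model (QEMU) read locally over PCIe.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'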
00:23:20.523 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:20.780 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:23:20.780 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.780 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:20.780 09:03:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.780 rmmod nvme_tcp 00:23:20.780 rmmod nvme_fabrics 00:23:20.780 rmmod nvme_keyring 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 91850 ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 91850 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 91850 ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 91850 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91850 00:23:20.780 killing process with pid 91850 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91850' 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 91850 00:23:20.780 [2024-05-15 09:03:36.927321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:20.780 09:03:36 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 91850 00:23:21.038 09:03:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:21.038 09:03:37 nvmf_identify_passthru -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:21.038 09:03:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:21.038 09:03:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.038 09:03:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:21.038 09:03:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.038 09:03:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:21.038 09:03:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.038 09:03:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:21.038 ************************************ 00:23:21.038 END TEST nvmf_identify_passthru 00:23:21.038 ************************************ 00:23:21.038 00:23:21.038 real 0m2.341s 00:23:21.038 user 0m4.905s 00:23:21.038 sys 0m0.702s 00:23:21.038 09:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:21.038 09:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:21.038 09:03:37 -- spdk/autotest.sh@301 -- # run_test_wrapper nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:21.038 09:03:37 -- spdk/autotest.sh@10 -- # local test_name=nvmf_dif 00:23:21.038 09:03:37 -- spdk/autotest.sh@11 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:21.038 09:03:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:21.038 09:03:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:21.038 09:03:37 -- common/autotest_common.sh@10 -- # set +x 00:23:21.038 ************************************ 00:23:21.038 START TEST nvmf_dif 00:23:21.038 ************************************ 00:23:21.038 09:03:37 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:21.038 * Looking for test storage... 
00:23:21.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:21.038 09:03:37 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.038 09:03:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.295 09:03:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.295 09:03:37 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.295 09:03:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.295 09:03:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.295 09:03:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.295 09:03:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.295 09:03:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:21.295 09:03:37 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.295 09:03:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:21.295 09:03:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:21.295 09:03:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:21.295 09:03:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:21.295 09:03:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.295 09:03:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:21.295 09:03:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:21.295 09:03:37 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:21.295 Cannot find device "nvmf_tgt_br" 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.295 Cannot find device "nvmf_tgt_br2" 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:21.295 Cannot find device "nvmf_tgt_br" 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:21.295 Cannot find device "nvmf_tgt_br2" 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:21.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:21.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:21.295 09:03:37 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:21.551 
09:03:37 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:21.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:23:21.551 00:23:21.551 --- 10.0.0.2 ping statistics --- 00:23:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.551 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:21.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:21.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:23:21.551 00:23:21.551 --- 10.0.0.3 ping statistics --- 00:23:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.551 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:23:21.551 00:23:21.551 --- 10.0.0.1 ping statistics --- 00:23:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.551 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:21.551 09:03:37 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:21.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:21.809 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:21.809 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:21.809 09:03:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:21.809 09:03:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=92174 00:23:21.809 
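Both nvmftestinit calls in this section (for identify_passthru above and for dif here) build the same virtual topology via nvmf_veth_init: a target network namespace, veth pairs bridged back to the host, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. A trimmed-down sketch with the names and addresses from the log follows; the second target interface is omitted for brevity, and it assumes root on a host with no conflicting links.

#!/usr/bin/env bash
# Trimmed-down sketch of the nvmf_veth_init topology:
# initiator (host) -- nvmf_br bridge -- target netns nvmf_tgt_ns_spdk.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if is the endpoint, *_br is the host/bridge-facing peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic (port 4420) and allow bridge-internal forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # same sanity check the harness performs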
09:03:37 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:21.809 09:03:37 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 92174 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 92174 ']' 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:21.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:21.809 09:03:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:21.809 [2024-05-15 09:03:38.014765] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:23:21.809 [2024-05-15 09:03:38.014871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.066 [2024-05-15 09:03:38.151431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.066 [2024-05-15 09:03:38.210083] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.066 [2024-05-15 09:03:38.210367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.066 [2024-05-15 09:03:38.210448] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.066 [2024-05-15 09:03:38.210526] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.066 [2024-05-15 09:03:38.210541] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
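The target is launched inside the network namespace and the harness then blocks until its RPC socket answers. A minimal sketch of that nvmfappstart/waitforlisten sequence follows; the rpc.py path is assumed from the repo layout shown in the log, and the polling loop is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh, which is more elaborate.

# Start nvmf_tgt inside the target namespace, as traced above.
NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
"${NVMF_APP[@]}" -i 0 -e 0xFFFF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the target is ready to serve requests.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
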
00:23:22.066 [2024-05-15 09:03:38.210591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.066 09:03:38 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:22.066 09:03:38 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:23:22.066 09:03:38 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.066 09:03:38 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.066 09:03:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 09:03:38 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.324 09:03:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:22.324 09:03:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:22.324 09:03:38 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.324 09:03:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 [2024-05-15 09:03:38.330181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.324 09:03:38 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.324 09:03:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:22.324 09:03:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:22.324 09:03:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:22.324 09:03:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 ************************************ 00:23:22.324 START TEST fio_dif_1_default 00:23:22.324 ************************************ 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 bdev_null0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:22.324 [2024-05-15 09:03:38.374111] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:22.324 [2024-05-15 09:03:38.374724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:22.324 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.324 { 00:23:22.324 "params": { 00:23:22.324 "name": "Nvme$subsystem", 00:23:22.324 "trtype": "$TEST_TRANSPORT", 00:23:22.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.324 "adrfam": "ipv4", 00:23:22.324 "trsvcid": "$NVMF_PORT", 00:23:22.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.324 "hdgst": ${hdgst:-false}, 00:23:22.324 "ddgst": ${ddgst:-false} 00:23:22.324 }, 00:23:22.324 "method": "bdev_nvme_attach_controller" 00:23:22.324 } 00:23:22.325 EOF 00:23:22.325 )") 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:22.325 "params": { 00:23:22.325 "name": "Nvme0", 00:23:22.325 "trtype": "tcp", 00:23:22.325 "traddr": "10.0.0.2", 00:23:22.325 "adrfam": "ipv4", 00:23:22.325 "trsvcid": "4420", 00:23:22.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:22.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:22.325 "hdgst": false, 00:23:22.325 "ddgst": false 00:23:22.325 }, 00:23:22.325 "method": "bdev_nvme_attach_controller" 00:23:22.325 }' 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:22.325 09:03:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:22.584 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:22.584 fio-3.35 00:23:22.584 Starting 1 thread 00:23:34.792 00:23:34.792 filename0: (groupid=0, jobs=1): err= 0: pid=92245: Wed May 15 09:03:49 2024 00:23:34.792 read: IOPS=1827, BW=7308KiB/s (7483kB/s)(71.4MiB/10001msec) 00:23:34.792 slat (nsec): min=7825, max=66679, avg=9090.21, stdev=3173.30 00:23:34.792 clat (usec): min=448, max=42032, avg=2162.00, stdev=8039.76 00:23:34.792 lat (usec): min=456, max=42042, avg=2171.09, stdev=8039.83 00:23:34.792 clat percentiles (usec): 00:23:34.792 | 1.00th=[ 453], 5.00th=[ 461], 10.00th=[ 465], 20.00th=[ 474], 00:23:34.792 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 486], 60.00th=[ 494], 00:23:34.792 | 70.00th=[ 498], 80.00th=[ 510], 90.00th=[ 570], 95.00th=[ 660], 00:23:34.792 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:34.792 | 99.99th=[42206] 00:23:34.792 bw ( KiB/s): min= 4736, max=18176, per=100.00%, avg=7419.53, stdev=3207.01, samples=19 
00:23:34.792 iops : min= 1184, max= 4544, avg=1854.84, stdev=801.78, samples=19 00:23:34.792 lat (usec) : 500=71.68%, 750=24.18% 00:23:34.792 lat (msec) : 10=0.02%, 50=4.12% 00:23:34.792 cpu : usr=90.64%, sys=8.31%, ctx=100, majf=0, minf=9 00:23:34.792 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.792 issued rwts: total=18272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.792 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:34.792 00:23:34.792 Run status group 0 (all jobs): 00:23:34.792 READ: bw=7308KiB/s (7483kB/s), 7308KiB/s-7308KiB/s (7483kB/s-7483kB/s), io=71.4MiB (74.8MB), run=10001-10001msec 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 ************************************ 00:23:34.792 END TEST fio_dif_1_default 00:23:34.792 ************************************ 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 00:23:34.792 real 0m10.913s 00:23:34.792 user 0m9.680s 00:23:34.792 sys 0m1.032s 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 09:03:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:34.792 09:03:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:34.792 09:03:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 ************************************ 00:23:34.792 START TEST fio_dif_1_multi_subsystems 00:23:34.792 ************************************ 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 
-- # for sub in "$@" 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 bdev_null0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 [2024-05-15 09:03:49.340111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 bdev_null1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.792 09:03:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.792 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:34.793 { 00:23:34.793 "params": { 00:23:34.793 "name": "Nvme$subsystem", 00:23:34.793 "trtype": "$TEST_TRANSPORT", 00:23:34.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.793 "adrfam": "ipv4", 00:23:34.793 "trsvcid": "$NVMF_PORT", 00:23:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.793 "hdgst": ${hdgst:-false}, 00:23:34.793 "ddgst": ${ddgst:-false} 00:23:34.793 }, 00:23:34.793 "method": "bdev_nvme_attach_controller" 00:23:34.793 } 00:23:34.793 EOF 00:23:34.793 )") 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 
00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:34.793 { 00:23:34.793 "params": { 00:23:34.793 "name": "Nvme$subsystem", 00:23:34.793 "trtype": "$TEST_TRANSPORT", 00:23:34.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.793 "adrfam": "ipv4", 00:23:34.793 "trsvcid": "$NVMF_PORT", 00:23:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.793 "hdgst": ${hdgst:-false}, 00:23:34.793 "ddgst": ${ddgst:-false} 00:23:34.793 }, 00:23:34.793 "method": "bdev_nvme_attach_controller" 00:23:34.793 } 00:23:34.793 EOF 00:23:34.793 )") 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
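The config fragments accumulated above are what gen_nvmf_target_json turns into the JSON that fio receives on /dev/fd/62; the resulting controller stanzas are printed just below. An illustrative reconstruction of the assembly is sketched here: the per-controller stanza is copied from the heredoc in the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about what the spdk_bdev ioengine consumes and does not appear verbatim in this part of the log.

gen_nvmf_target_json_sketch() {
    local subsystem
    local config=()
    # One bdev_nvme_attach_controller stanza per subsystem id (default: 0... wait, "1").
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with commas (first character of IFS) and pretty-print,
    # mirroring the IFS=, / printf / jq steps visible in the trace.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

Called as gen_nvmf_target_json_sketch 0 1, this would produce the two-controller document shown in the next trace block.
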
00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:34.793 "params": { 00:23:34.793 "name": "Nvme0", 00:23:34.793 "trtype": "tcp", 00:23:34.793 "traddr": "10.0.0.2", 00:23:34.793 "adrfam": "ipv4", 00:23:34.793 "trsvcid": "4420", 00:23:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.793 "hdgst": false, 00:23:34.793 "ddgst": false 00:23:34.793 }, 00:23:34.793 "method": "bdev_nvme_attach_controller" 00:23:34.793 },{ 00:23:34.793 "params": { 00:23:34.793 "name": "Nvme1", 00:23:34.793 "trtype": "tcp", 00:23:34.793 "traddr": "10.0.0.2", 00:23:34.793 "adrfam": "ipv4", 00:23:34.793 "trsvcid": "4420", 00:23:34.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.793 "hdgst": false, 00:23:34.793 "ddgst": false 00:23:34.793 }, 00:23:34.793 "method": "bdev_nvme_attach_controller" 00:23:34.793 }' 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.793 09:03:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.793 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:34.793 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:34.793 fio-3.35 00:23:34.793 Starting 2 threads 00:23:44.786 00:23:44.786 filename0: (groupid=0, jobs=1): err= 0: pid=92405: Wed May 15 09:04:00 2024 00:23:44.786 read: IOPS=526, BW=2107KiB/s (2158kB/s)(20.6MiB/10030msec) 00:23:44.786 slat (nsec): min=7905, max=75450, avg=10490.64, stdev=5652.27 00:23:44.786 clat (usec): min=465, max=42877, avg=7559.84, stdev=15077.42 00:23:44.786 lat (usec): min=473, max=42907, avg=7570.33, stdev=15078.43 00:23:44.786 clat percentiles (usec): 00:23:44.786 | 1.00th=[ 490], 5.00th=[ 529], 10.00th=[ 570], 20.00th=[ 619], 00:23:44.786 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[ 701], 60.00th=[ 1037], 00:23:44.786 | 70.00th=[ 1156], 80.00th=[ 1369], 90.00th=[41157], 95.00th=[41157], 00:23:44.786 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:23:44.786 | 99.99th=[42730] 00:23:44.786 bw 
( KiB/s): min= 480, max= 8096, per=51.09%, avg=2112.00, stdev=2235.95, samples=20 00:23:44.786 iops : min= 120, max= 2024, avg=528.00, stdev=558.99, samples=20 00:23:44.786 lat (usec) : 500=1.78%, 750=51.97%, 1000=5.83% 00:23:44.786 lat (msec) : 2=23.54%, 4=0.15%, 50=16.73% 00:23:44.786 cpu : usr=94.65%, sys=4.67%, ctx=20, majf=0, minf=9 00:23:44.786 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.786 issued rwts: total=5284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.786 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:44.786 filename1: (groupid=0, jobs=1): err= 0: pid=92406: Wed May 15 09:04:00 2024 00:23:44.786 read: IOPS=506, BW=2028KiB/s (2077kB/s)(19.9MiB/10036msec) 00:23:44.786 slat (nsec): min=5976, max=63159, avg=10199.15, stdev=5060.17 00:23:44.786 clat (usec): min=461, max=41999, avg=7857.99, stdev=15319.32 00:23:44.786 lat (usec): min=469, max=42031, avg=7868.19, stdev=15320.03 00:23:44.786 clat percentiles (usec): 00:23:44.786 | 1.00th=[ 482], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 611], 00:23:44.786 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 840], 60.00th=[ 1074], 00:23:44.786 | 70.00th=[ 1156], 80.00th=[ 1401], 90.00th=[41157], 95.00th=[41157], 00:23:44.786 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:23:44.786 | 99.99th=[42206] 00:23:44.786 bw ( KiB/s): min= 544, max= 8576, per=49.18%, avg=2033.60, stdev=1932.94, samples=20 00:23:44.786 iops : min= 136, max= 2144, avg=508.40, stdev=483.23, samples=20 00:23:44.786 lat (usec) : 500=3.26%, 750=43.38%, 1000=11.20% 00:23:44.786 lat (msec) : 2=24.63%, 4=0.08%, 50=17.45% 00:23:44.786 cpu : usr=94.80%, sys=4.56%, ctx=66, majf=0, minf=0 00:23:44.786 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.786 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.786 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:44.786 00:23:44.786 Run status group 0 (all jobs): 00:23:44.786 READ: bw=4134KiB/s (4233kB/s), 2028KiB/s-2107KiB/s (2077kB/s-2158kB/s), io=40.5MiB (42.5MB), run=10030-10036msec 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
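Teardown mirrors setup: each NVMe-oF subsystem is deleted first, then the null bdev backing it. Collected from the rpc_cmd calls traced above and below, the full per-subsystem lifecycle for id 0 looks roughly like this; rpc_cmd in the harness is a thin wrapper around rpc.py, whose path is assumed from the repo layout in the log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Setup: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and
# DIF type 1, exported through an NVMe-oF/TCP subsystem on 10.0.0.2:4420.
"$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Teardown: remove the subsystem before deleting the bdev it exposes.
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
"$rpc" bdev_null_delete bdev_null0
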
00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.786 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:44.787 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 ************************************ 00:23:44.787 END TEST fio_dif_1_multi_subsystems 00:23:44.787 ************************************ 00:23:44.787 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.787 00:23:44.787 real 0m11.128s 00:23:44.787 user 0m19.766s 00:23:44.787 sys 0m1.164s 00:23:44.787 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 09:04:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:44.787 09:04:00 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:44.787 09:04:00 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 ************************************ 00:23:44.787 START TEST fio_dif_rand_params 00:23:44.787 ************************************ 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.787 09:04:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 bdev_null0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.787 [2024-05-15 09:04:00.512374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.787 { 00:23:44.787 "params": { 00:23:44.787 "name": "Nvme$subsystem", 00:23:44.787 "trtype": "$TEST_TRANSPORT", 00:23:44.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.787 "adrfam": "ipv4", 00:23:44.787 "trsvcid": "$NVMF_PORT", 00:23:44.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.787 "hdgst": ${hdgst:-false}, 00:23:44.787 "ddgst": ${ddgst:-false} 00:23:44.787 }, 00:23:44.787 "method": "bdev_nvme_attach_controller" 00:23:44.787 } 00:23:44.787 EOF 00:23:44.787 )") 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
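What follows in the trace is the actual fio launch. Condensed, the fio_bdev wrapper probes the SPDK fio plugin for a linked sanitizer runtime (the ldd/grep/awk calls below in the log), preloads whatever it finds together with the plugin, and points fio at the generated job file and JSON config. Paths are the ones printed in the log; /dev/fd/62 and /dev/fd/61 are process-substitution descriptors set up by the caller, so this fragment only makes sense in that context.

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
fio=/usr/src/fio/fio

# If the plugin links against ASAN, that library must be preloaded ahead of it.
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')

# /dev/fd/62 carries the bdev JSON config, /dev/fd/61 the generated fio job file.
LD_PRELOAD="$asan_lib $plugin" "$fio" --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61
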
00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:44.787 "params": { 00:23:44.787 "name": "Nvme0", 00:23:44.787 "trtype": "tcp", 00:23:44.787 "traddr": "10.0.0.2", 00:23:44.787 "adrfam": "ipv4", 00:23:44.787 "trsvcid": "4420", 00:23:44.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:44.787 "hdgst": false, 00:23:44.787 "ddgst": false 00:23:44.787 }, 00:23:44.787 "method": "bdev_nvme_attach_controller" 00:23:44.787 }' 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:44.787 09:04:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.787 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:44.787 ... 
00:23:44.787 fio-3.35 00:23:44.787 Starting 3 threads 00:23:50.053 00:23:50.053 filename0: (groupid=0, jobs=1): err= 0: pid=92563: Wed May 15 09:04:06 2024 00:23:50.053 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(130MiB/5005msec) 00:23:50.053 slat (nsec): min=8012, max=75944, avg=15739.25, stdev=7175.17 00:23:50.053 clat (usec): min=4944, max=56497, avg=14449.39, stdev=7431.11 00:23:50.053 lat (usec): min=4953, max=56522, avg=14465.13, stdev=7431.62 00:23:50.053 clat percentiles (usec): 00:23:50.053 | 1.00th=[ 7504], 5.00th=[10421], 10.00th=[11469], 20.00th=[12125], 00:23:50.053 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:23:50.053 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15664], 95.00th=[17171], 00:23:50.053 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:23:50.053 | 99.99th=[56361] 00:23:50.053 bw ( KiB/s): min=21504, max=30720, per=30.97%, avg=26496.00, stdev=3168.27, samples=10 00:23:50.053 iops : min= 168, max= 240, avg=207.00, stdev=24.75, samples=10 00:23:50.053 lat (msec) : 10=4.92%, 20=91.80%, 50=0.19%, 100=3.09% 00:23:50.053 cpu : usr=92.27%, sys=6.10%, ctx=10, majf=0, minf=0 00:23:50.053 IO depths : 1=6.9%, 2=93.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:50.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.053 issued rwts: total=1037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.053 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:50.053 filename0: (groupid=0, jobs=1): err= 0: pid=92564: Wed May 15 09:04:06 2024 00:23:50.053 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5004msec) 00:23:50.053 slat (nsec): min=4592, max=56141, avg=15265.54, stdev=6259.37 00:23:50.053 clat (usec): min=6375, max=53160, avg=11585.80, stdev=4249.67 00:23:50.053 lat (usec): min=6390, max=53171, avg=11601.07, stdev=4250.63 00:23:50.053 clat percentiles (usec): 00:23:50.053 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[10159], 00:23:50.053 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:23:50.053 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13304], 95.00th=[14877], 00:23:50.053 | 99.00th=[18744], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:23:50.053 | 99.99th=[53216] 00:23:50.053 bw ( KiB/s): min=23808, max=37632, per=38.66%, avg=33075.20, stdev=4214.81, samples=10 00:23:50.053 iops : min= 186, max= 294, avg=258.40, stdev=32.93, samples=10 00:23:50.053 lat (msec) : 10=16.55%, 20=82.52%, 50=0.23%, 100=0.70% 00:23:50.053 cpu : usr=91.31%, sys=6.88%, ctx=11, majf=0, minf=0 00:23:50.053 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:50.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.053 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.053 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:50.053 filename0: (groupid=0, jobs=1): err= 0: pid=92565: Wed May 15 09:04:06 2024 00:23:50.053 read: IOPS=202, BW=25.4MiB/s (26.6MB/s)(127MiB/5004msec) 00:23:50.053 slat (nsec): min=7989, max=45620, avg=16096.27, stdev=4814.54 00:23:50.053 clat (usec): min=4139, max=23493, avg=14766.87, stdev=3012.69 00:23:50.053 lat (usec): min=4151, max=23508, avg=14782.97, stdev=3013.30 00:23:50.053 clat percentiles (usec): 00:23:50.053 | 1.00th=[ 4555], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[12911], 
00:23:50.053 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15533], 60.00th=[15795], 00:23:50.053 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17433], 95.00th=[18744], 00:23:50.053 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21890], 99.95th=[23462], 00:23:50.053 | 99.99th=[23462] 00:23:50.053 bw ( KiB/s): min=22272, max=32064, per=30.29%, avg=25913.60, stdev=2984.91, samples=10 00:23:50.053 iops : min= 174, max= 250, avg=202.40, stdev=23.21, samples=10 00:23:50.053 lat (msec) : 10=10.84%, 20=86.60%, 50=2.56% 00:23:50.053 cpu : usr=92.56%, sys=5.94%, ctx=8, majf=0, minf=0 00:23:50.053 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:50.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.053 issued rwts: total=1015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.053 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:50.053 00:23:50.053 Run status group 0 (all jobs): 00:23:50.053 READ: bw=83.5MiB/s (87.6MB/s), 25.4MiB/s-32.3MiB/s (26.6MB/s-33.9MB/s), io=418MiB (438MB), run=5004-5005msec 00:23:50.312 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:50.312 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:50.312 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:50.312 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:50.312 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 bdev_null0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 [2024-05-15 09:04:06.474904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 bdev_null1 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 bdev_null2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.313 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.571 { 00:23:50.571 "params": { 00:23:50.571 "name": "Nvme$subsystem", 00:23:50.571 "trtype": "$TEST_TRANSPORT", 00:23:50.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.571 "adrfam": "ipv4", 00:23:50.571 "trsvcid": "$NVMF_PORT", 
00:23:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.571 "hdgst": ${hdgst:-false}, 00:23:50.571 "ddgst": ${ddgst:-false} 00:23:50.571 }, 00:23:50.571 "method": "bdev_nvme_attach_controller" 00:23:50.571 } 00:23:50.571 EOF 00:23:50.571 )") 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.571 { 00:23:50.571 "params": { 00:23:50.571 "name": "Nvme$subsystem", 00:23:50.571 "trtype": "$TEST_TRANSPORT", 00:23:50.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.571 "adrfam": "ipv4", 00:23:50.571 "trsvcid": "$NVMF_PORT", 00:23:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.571 "hdgst": ${hdgst:-false}, 00:23:50.571 "ddgst": ${ddgst:-false} 00:23:50.571 }, 00:23:50.571 "method": "bdev_nvme_attach_controller" 00:23:50.571 } 00:23:50.571 EOF 00:23:50.571 )") 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.571 { 00:23:50.571 
"params": { 00:23:50.571 "name": "Nvme$subsystem", 00:23:50.571 "trtype": "$TEST_TRANSPORT", 00:23:50.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.571 "adrfam": "ipv4", 00:23:50.571 "trsvcid": "$NVMF_PORT", 00:23:50.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.571 "hdgst": ${hdgst:-false}, 00:23:50.571 "ddgst": ${ddgst:-false} 00:23:50.571 }, 00:23:50.571 "method": "bdev_nvme_attach_controller" 00:23:50.571 } 00:23:50.571 EOF 00:23:50.571 )") 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:50.571 09:04:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:50.571 "params": { 00:23:50.571 "name": "Nvme0", 00:23:50.572 "trtype": "tcp", 00:23:50.572 "traddr": "10.0.0.2", 00:23:50.572 "adrfam": "ipv4", 00:23:50.572 "trsvcid": "4420", 00:23:50.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:50.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:50.572 "hdgst": false, 00:23:50.572 "ddgst": false 00:23:50.572 }, 00:23:50.572 "method": "bdev_nvme_attach_controller" 00:23:50.572 },{ 00:23:50.572 "params": { 00:23:50.572 "name": "Nvme1", 00:23:50.572 "trtype": "tcp", 00:23:50.572 "traddr": "10.0.0.2", 00:23:50.572 "adrfam": "ipv4", 00:23:50.572 "trsvcid": "4420", 00:23:50.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.572 "hdgst": false, 00:23:50.572 "ddgst": false 00:23:50.572 }, 00:23:50.572 "method": "bdev_nvme_attach_controller" 00:23:50.572 },{ 00:23:50.572 "params": { 00:23:50.572 "name": "Nvme2", 00:23:50.572 "trtype": "tcp", 00:23:50.572 "traddr": "10.0.0.2", 00:23:50.572 "adrfam": "ipv4", 00:23:50.572 "trsvcid": "4420", 00:23:50.572 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:50.572 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:50.572 "hdgst": false, 00:23:50.572 "ddgst": false 00:23:50.572 }, 00:23:50.572 "method": "bdev_nvme_attach_controller" 00:23:50.572 }' 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:50.572 09:04:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:50.572 09:04:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:50.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:50.572 ... 00:23:50.572 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:50.572 ... 00:23:50.572 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:50.572 ... 00:23:50.572 fio-3.35 00:23:50.572 Starting 24 threads 00:24:02.802 00:24:02.802 filename0: (groupid=0, jobs=1): err= 0: pid=92659: Wed May 15 09:04:17 2024 00:24:02.802 read: IOPS=160, BW=640KiB/s (656kB/s)(6508KiB/10162msec) 00:24:02.802 slat (usec): min=7, max=8044, avg=26.57, stdev=281.51 00:24:02.802 clat (msec): min=47, max=337, avg=99.77, stdev=37.92 00:24:02.802 lat (msec): min=47, max=337, avg=99.80, stdev=37.92 00:24:02.802 clat percentiles (msec): 00:24:02.802 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 72], 00:24:02.802 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 105], 00:24:02.802 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 157], 00:24:02.802 | 99.00th=[ 194], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:24:02.802 | 99.99th=[ 338] 00:24:02.802 bw ( KiB/s): min= 384, max= 816, per=4.30%, avg=644.35, stdev=99.68, samples=20 00:24:02.802 iops : min= 96, max= 204, avg=161.05, stdev=24.93, samples=20 00:24:02.802 lat (msec) : 50=4.24%, 100=55.13%, 250=39.64%, 500=0.98% 00:24:02.802 cpu : usr=31.92%, sys=1.07%, ctx=879, majf=0, minf=9 00:24:02.802 IO depths : 1=1.4%, 2=3.0%, 4=10.7%, 8=72.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:24:02.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.802 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.802 issued rwts: total=1627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.802 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.802 filename0: (groupid=0, jobs=1): err= 0: pid=92660: Wed May 15 09:04:17 2024 00:24:02.802 read: IOPS=168, BW=675KiB/s (691kB/s)(6860KiB/10169msec) 00:24:02.802 slat (usec): min=8, max=4054, avg=22.37, stdev=168.74 00:24:02.802 clat (msec): min=43, max=275, avg=94.20, stdev=36.16 00:24:02.802 lat (msec): min=43, max=275, avg=94.22, stdev=36.16 00:24:02.802 clat percentiles (msec): 00:24:02.802 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 67], 00:24:02.802 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 95], 00:24:02.802 | 70.00th=[ 105], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 157], 00:24:02.802 | 99.00th=[ 224], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:24:02.802 | 99.99th=[ 275] 00:24:02.802 bw ( KiB/s): min= 432, max= 1024, per=4.53%, avg=679.50, stdev=145.09, samples=20 00:24:02.803 iops : min= 108, max= 256, avg=169.85, stdev=36.28, samples=20 00:24:02.803 lat (msec) : 50=5.13%, 100=61.98%, 250=32.30%, 500=0.58% 00:24:02.803 cpu : usr=41.06%, sys=1.49%, ctx=1210, majf=0, minf=9 00:24:02.803 IO depths : 1=1.0%, 2=2.2%, 4=8.7%, 8=75.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 
issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename0: (groupid=0, jobs=1): err= 0: pid=92661: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=158, BW=633KiB/s (648kB/s)(6440KiB/10171msec) 00:24:02.803 slat (usec): min=8, max=8116, avg=46.90, stdev=347.17 00:24:02.803 clat (msec): min=47, max=405, avg=100.74, stdev=35.16 00:24:02.803 lat (msec): min=47, max=405, avg=100.79, stdev=35.17 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 72], 00:24:02.803 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 105], 00:24:02.803 | 70.00th=[ 114], 80.00th=[ 124], 90.00th=[ 150], 95.00th=[ 159], 00:24:02.803 | 99.00th=[ 194], 99.50th=[ 251], 99.90th=[ 405], 99.95th=[ 405], 00:24:02.803 | 99.99th=[ 405] 00:24:02.803 bw ( KiB/s): min= 368, max= 896, per=4.25%, avg=637.50, stdev=140.09, samples=20 00:24:02.803 iops : min= 92, max= 224, avg=159.35, stdev=35.05, samples=20 00:24:02.803 lat (msec) : 50=2.36%, 100=53.66%, 250=43.11%, 500=0.87% 00:24:02.803 cpu : usr=43.15%, sys=1.54%, ctx=1286, majf=0, minf=9 00:24:02.803 IO depths : 1=2.2%, 2=4.9%, 4=14.8%, 8=67.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename0: (groupid=0, jobs=1): err= 0: pid=92662: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=143, BW=576KiB/s (590kB/s)(5824KiB/10115msec) 00:24:02.803 slat (usec): min=8, max=8059, avg=28.68, stdev=315.50 00:24:02.803 clat (msec): min=47, max=405, avg=110.91, stdev=41.38 00:24:02.803 lat (msec): min=47, max=405, avg=110.94, stdev=41.38 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 51], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 84], 00:24:02.803 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 108], 60.00th=[ 117], 00:24:02.803 | 70.00th=[ 123], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 157], 00:24:02.803 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:24:02.803 | 99.99th=[ 405] 00:24:02.803 bw ( KiB/s): min= 256, max= 768, per=3.84%, avg=576.05, stdev=111.22, samples=20 00:24:02.803 iops : min= 64, max= 192, avg=144.00, stdev=27.81, samples=20 00:24:02.803 lat (msec) : 50=0.69%, 100=39.42%, 250=58.79%, 500=1.10% 00:24:02.803 cpu : usr=33.73%, sys=1.37%, ctx=929, majf=0, minf=9 00:24:02.803 IO depths : 1=3.2%, 2=7.5%, 4=19.4%, 8=60.5%, 16=9.5%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename0: (groupid=0, jobs=1): err= 0: pid=92663: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=170, BW=680KiB/s (697kB/s)(6936KiB/10194msec) 00:24:02.803 slat (usec): min=5, max=2305, avg=25.51, stdev=56.83 00:24:02.803 clat (msec): min=3, max=258, avg=93.45, stdev=39.95 00:24:02.803 lat (msec): min=3, max=258, avg=93.47, stdev=39.95 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 5], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 66], 00:24:02.803 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 
95], 00:24:02.803 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 157], 00:24:02.803 | 99.00th=[ 228], 99.50th=[ 259], 99.90th=[ 259], 99.95th=[ 259], 00:24:02.803 | 99.99th=[ 259] 00:24:02.803 bw ( KiB/s): min= 432, max= 1526, per=4.58%, avg=686.60, stdev=232.99, samples=20 00:24:02.803 iops : min= 108, max= 381, avg=171.60, stdev=58.17, samples=20 00:24:02.803 lat (msec) : 4=0.92%, 10=0.92%, 20=1.85%, 50=6.34%, 100=53.58% 00:24:02.803 lat (msec) : 250=35.81%, 500=0.58% 00:24:02.803 cpu : usr=32.20%, sys=1.22%, ctx=955, majf=0, minf=9 00:24:02.803 IO depths : 1=1.3%, 2=2.8%, 4=10.5%, 8=73.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename0: (groupid=0, jobs=1): err= 0: pid=92664: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=177, BW=708KiB/s (725kB/s)(7220KiB/10193msec) 00:24:02.803 slat (usec): min=7, max=4078, avg=21.64, stdev=134.78 00:24:02.803 clat (msec): min=5, max=286, avg=89.88, stdev=34.54 00:24:02.803 lat (msec): min=5, max=286, avg=89.90, stdev=34.54 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 10], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 66], 00:24:02.803 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 94], 00:24:02.803 | 70.00th=[ 102], 80.00th=[ 111], 90.00th=[ 125], 95.00th=[ 142], 00:24:02.803 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 288], 99.95th=[ 288], 00:24:02.803 | 99.99th=[ 288] 00:24:02.803 bw ( KiB/s): min= 480, max= 1147, per=4.77%, avg=715.25, stdev=132.70, samples=20 00:24:02.803 iops : min= 120, max= 286, avg=178.75, stdev=33.06, samples=20 00:24:02.803 lat (msec) : 10=1.33%, 20=1.33%, 50=3.66%, 100=62.38%, 250=30.75% 00:24:02.803 lat (msec) : 500=0.55% 00:24:02.803 cpu : usr=37.78%, sys=1.35%, ctx=1285, majf=0, minf=9 00:24:02.803 IO depths : 1=0.9%, 2=2.0%, 4=9.0%, 8=75.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename0: (groupid=0, jobs=1): err= 0: pid=92665: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=167, BW=671KiB/s (688kB/s)(6828KiB/10169msec) 00:24:02.803 slat (usec): min=4, max=4049, avg=22.62, stdev=138.42 00:24:02.803 clat (msec): min=37, max=258, avg=94.67, stdev=33.55 00:24:02.803 lat (msec): min=37, max=258, avg=94.69, stdev=33.55 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 70], 00:24:02.803 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 96], 00:24:02.803 | 70.00th=[ 107], 80.00th=[ 115], 90.00th=[ 134], 95.00th=[ 157], 00:24:02.803 | 99.00th=[ 230], 99.50th=[ 259], 99.90th=[ 259], 99.95th=[ 259], 00:24:02.803 | 99.99th=[ 259] 00:24:02.803 bw ( KiB/s): min= 472, max= 864, per=4.51%, avg=676.30, stdev=129.23, samples=20 00:24:02.803 iops : min= 118, max= 216, avg=169.05, stdev=32.31, samples=20 00:24:02.803 lat (msec) : 50=4.10%, 100=61.92%, 250=33.39%, 500=0.59% 00:24:02.803 cpu : usr=40.45%, sys=1.49%, ctx=1277, majf=0, minf=9 00:24:02.803 IO depths : 1=1.0%, 2=2.1%, 4=9.4%, 8=74.9%, 16=12.7%, 
32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename0: (groupid=0, jobs=1): err= 0: pid=92666: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=177, BW=711KiB/s (728kB/s)(7248KiB/10196msec) 00:24:02.803 slat (usec): min=7, max=8081, avg=29.91, stdev=340.66 00:24:02.803 clat (msec): min=10, max=248, avg=89.45, stdev=34.78 00:24:02.803 lat (msec): min=10, max=248, avg=89.48, stdev=34.80 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 22], 5.00th=[ 42], 10.00th=[ 51], 20.00th=[ 63], 00:24:02.803 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 95], 00:24:02.803 | 70.00th=[ 106], 80.00th=[ 112], 90.00th=[ 131], 95.00th=[ 146], 00:24:02.803 | 99.00th=[ 230], 99.50th=[ 247], 99.90th=[ 247], 99.95th=[ 249], 00:24:02.803 | 99.99th=[ 249] 00:24:02.803 bw ( KiB/s): min= 464, max= 1396, per=4.78%, avg=717.70, stdev=204.59, samples=20 00:24:02.803 iops : min= 116, max= 349, avg=179.40, stdev=51.15, samples=20 00:24:02.803 lat (msec) : 20=0.88%, 50=8.50%, 100=56.24%, 250=34.38% 00:24:02.803 cpu : usr=36.64%, sys=1.44%, ctx=1369, majf=0, minf=9 00:24:02.803 IO depths : 1=2.0%, 2=4.1%, 4=12.7%, 8=70.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename1: (groupid=0, jobs=1): err= 0: pid=92667: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=163, BW=655KiB/s (671kB/s)(6676KiB/10188msec) 00:24:02.803 slat (usec): min=7, max=8057, avg=39.59, stdev=338.29 00:24:02.803 clat (msec): min=12, max=245, avg=96.84, stdev=35.23 00:24:02.803 lat (msec): min=12, max=245, avg=96.88, stdev=35.24 00:24:02.803 clat percentiles (msec): 00:24:02.803 | 1.00th=[ 16], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 71], 00:24:02.803 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 102], 00:24:02.803 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 155], 00:24:02.803 | 99.00th=[ 192], 99.50th=[ 245], 99.90th=[ 247], 99.95th=[ 247], 00:24:02.803 | 99.99th=[ 247] 00:24:02.803 bw ( KiB/s): min= 504, max= 1008, per=4.42%, avg=662.50, stdev=116.28, samples=20 00:24:02.803 iops : min= 126, max= 252, avg=165.55, stdev=29.03, samples=20 00:24:02.803 lat (msec) : 20=1.92%, 50=2.04%, 100=56.02%, 250=40.02% 00:24:02.803 cpu : usr=39.35%, sys=1.39%, ctx=1136, majf=0, minf=9 00:24:02.803 IO depths : 1=2.4%, 2=5.8%, 4=16.1%, 8=65.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:24:02.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.803 issued rwts: total=1669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.803 filename1: (groupid=0, jobs=1): err= 0: pid=92668: Wed May 15 09:04:17 2024 00:24:02.803 read: IOPS=162, BW=650KiB/s (666kB/s)(6600KiB/10152msec) 00:24:02.803 slat (usec): min=7, max=8058, avg=22.59, stdev=221.49 00:24:02.803 clat (msec): min=33, max=309, avg=97.55, stdev=36.08 00:24:02.804 lat (msec): min=33, max=309, 
avg=97.57, stdev=36.08 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:24:02.804 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 99], 00:24:02.804 | 70.00th=[ 108], 80.00th=[ 122], 90.00th=[ 136], 95.00th=[ 153], 00:24:02.804 | 99.00th=[ 251], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:24:02.804 | 99.99th=[ 309] 00:24:02.804 bw ( KiB/s): min= 304, max= 960, per=4.36%, avg=653.50, stdev=136.16, samples=20 00:24:02.804 iops : min= 76, max= 240, avg=163.35, stdev=34.03, samples=20 00:24:02.804 lat (msec) : 50=5.70%, 100=55.82%, 250=37.15%, 500=1.33% 00:24:02.804 cpu : usr=33.82%, sys=1.35%, ctx=950, majf=0, minf=9 00:24:02.804 IO depths : 1=1.6%, 2=3.4%, 4=10.2%, 8=72.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename1: (groupid=0, jobs=1): err= 0: pid=92669: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=158, BW=635KiB/s (650kB/s)(6444KiB/10145msec) 00:24:02.804 slat (usec): min=8, max=8066, avg=23.87, stdev=224.37 00:24:02.804 clat (msec): min=47, max=407, avg=100.57, stdev=41.86 00:24:02.804 lat (msec): min=47, max=407, avg=100.60, stdev=41.86 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 72], 00:24:02.804 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 105], 00:24:02.804 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 157], 00:24:02.804 | 99.00th=[ 190], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:24:02.804 | 99.99th=[ 409] 00:24:02.804 bw ( KiB/s): min= 256, max= 848, per=4.25%, avg=637.90, stdev=134.31, samples=20 00:24:02.804 iops : min= 64, max= 212, avg=159.45, stdev=33.56, samples=20 00:24:02.804 lat (msec) : 50=2.17%, 100=57.23%, 250=39.60%, 500=0.99% 00:24:02.804 cpu : usr=35.36%, sys=1.22%, ctx=869, majf=0, minf=9 00:24:02.804 IO depths : 1=1.6%, 2=3.4%, 4=11.2%, 8=71.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename1: (groupid=0, jobs=1): err= 0: pid=92670: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=126, BW=506KiB/s (519kB/s)(5120KiB/10111msec) 00:24:02.804 slat (usec): min=3, max=8054, avg=29.57, stdev=224.81 00:24:02.804 clat (msec): min=58, max=414, avg=126.02, stdev=47.26 00:24:02.804 lat (msec): min=58, max=414, avg=126.05, stdev=47.26 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 60], 5.00th=[ 73], 10.00th=[ 83], 20.00th=[ 93], 00:24:02.804 | 30.00th=[ 97], 40.00th=[ 108], 50.00th=[ 120], 60.00th=[ 132], 00:24:02.804 | 70.00th=[ 144], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 192], 00:24:02.804 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:24:02.804 | 99.99th=[ 414] 00:24:02.804 bw ( KiB/s): min= 256, max= 696, per=3.37%, avg=505.15, stdev=95.33, samples=20 00:24:02.804 iops : min= 64, max= 174, avg=126.25, stdev=23.83, samples=20 00:24:02.804 lat (msec) : 100=32.58%, 250=66.17%, 500=1.25% 00:24:02.804 cpu : usr=31.82%, sys=1.21%, 
ctx=936, majf=0, minf=9 00:24:02.804 IO depths : 1=3.0%, 2=6.5%, 4=17.8%, 8=63.1%, 16=9.6%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=91.7%, 8=2.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename1: (groupid=0, jobs=1): err= 0: pid=92671: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=152, BW=612KiB/s (626kB/s)(6216KiB/10162msec) 00:24:02.804 slat (usec): min=5, max=8056, avg=29.64, stdev=292.66 00:24:02.804 clat (msec): min=44, max=345, avg=104.39, stdev=39.65 00:24:02.804 lat (msec): min=44, max=345, avg=104.42, stdev=39.64 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 73], 00:24:02.804 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 99], 60.00th=[ 105], 00:24:02.804 | 70.00th=[ 116], 80.00th=[ 127], 90.00th=[ 155], 95.00th=[ 169], 00:24:02.804 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 347], 00:24:02.804 | 99.99th=[ 347] 00:24:02.804 bw ( KiB/s): min= 384, max= 784, per=4.10%, avg=615.15, stdev=113.18, samples=20 00:24:02.804 iops : min= 96, max= 196, avg=153.75, stdev=28.29, samples=20 00:24:02.804 lat (msec) : 50=1.16%, 100=51.74%, 250=45.69%, 500=1.42% 00:24:02.804 cpu : usr=36.91%, sys=1.12%, ctx=1056, majf=0, minf=9 00:24:02.804 IO depths : 1=2.8%, 2=6.1%, 4=15.8%, 8=64.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename1: (groupid=0, jobs=1): err= 0: pid=92672: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=141, BW=566KiB/s (580kB/s)(5736KiB/10133msec) 00:24:02.804 slat (usec): min=3, max=8043, avg=30.09, stdev=242.42 00:24:02.804 clat (msec): min=45, max=304, avg=112.80, stdev=41.56 00:24:02.804 lat (msec): min=45, max=304, avg=112.83, stdev=41.56 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 84], 00:24:02.804 | 30.00th=[ 91], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 112], 00:24:02.804 | 70.00th=[ 124], 80.00th=[ 138], 90.00th=[ 155], 95.00th=[ 192], 00:24:02.804 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:24:02.804 | 99.99th=[ 305] 00:24:02.804 bw ( KiB/s): min= 336, max= 768, per=3.78%, avg=567.20, stdev=120.40, samples=20 00:24:02.804 iops : min= 84, max= 192, avg=141.80, stdev=30.10, samples=20 00:24:02.804 lat (msec) : 50=1.32%, 100=41.14%, 250=55.30%, 500=2.23% 00:24:02.804 cpu : usr=35.94%, sys=1.29%, ctx=1025, majf=0, minf=9 00:24:02.804 IO depths : 1=2.7%, 2=5.7%, 4=15.4%, 8=66.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=91.3%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename1: (groupid=0, jobs=1): err= 0: pid=92673: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=138, BW=555KiB/s (569kB/s)(5616KiB/10115msec) 00:24:02.804 slat (usec): min=8, max=8081, avg=22.74, stdev=215.49 00:24:02.804 
clat (msec): min=47, max=257, avg=114.75, stdev=34.66 00:24:02.804 lat (msec): min=47, max=257, avg=114.77, stdev=34.66 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 48], 5.00th=[ 72], 10.00th=[ 80], 20.00th=[ 88], 00:24:02.804 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 115], 00:24:02.804 | 70.00th=[ 124], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 174], 00:24:02.804 | 99.00th=[ 241], 99.50th=[ 253], 99.90th=[ 257], 99.95th=[ 257], 00:24:02.804 | 99.99th=[ 257] 00:24:02.804 bw ( KiB/s): min= 336, max= 696, per=3.70%, avg=555.10, stdev=96.69, samples=20 00:24:02.804 iops : min= 84, max= 174, avg=138.75, stdev=24.19, samples=20 00:24:02.804 lat (msec) : 50=1.42%, 100=33.90%, 250=63.89%, 500=0.78% 00:24:02.804 cpu : usr=36.29%, sys=1.26%, ctx=1076, majf=0, minf=9 00:24:02.804 IO depths : 1=2.4%, 2=5.4%, 4=15.6%, 8=65.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename1: (groupid=0, jobs=1): err= 0: pid=92674: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=175, BW=701KiB/s (718kB/s)(7140KiB/10188msec) 00:24:02.804 slat (usec): min=4, max=4038, avg=28.39, stdev=95.77 00:24:02.804 clat (msec): min=12, max=336, avg=91.07, stdev=40.60 00:24:02.804 lat (msec): min=12, max=336, avg=91.10, stdev=40.60 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 63], 00:24:02.804 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 95], 00:24:02.804 | 70.00th=[ 104], 80.00th=[ 116], 90.00th=[ 132], 95.00th=[ 146], 00:24:02.804 | 99.00th=[ 207], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:24:02.804 | 99.99th=[ 338] 00:24:02.804 bw ( KiB/s): min= 383, max= 1024, per=4.71%, avg=706.95, stdev=173.37, samples=20 00:24:02.804 iops : min= 95, max= 256, avg=176.65, stdev=43.40, samples=20 00:24:02.804 lat (msec) : 20=1.79%, 50=6.83%, 100=57.93%, 250=32.55%, 500=0.90% 00:24:02.804 cpu : usr=41.59%, sys=1.51%, ctx=1145, majf=0, minf=9 00:24:02.804 IO depths : 1=1.2%, 2=2.6%, 4=10.0%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 issued rwts: total=1785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.804 filename2: (groupid=0, jobs=1): err= 0: pid=92675: Wed May 15 09:04:17 2024 00:24:02.804 read: IOPS=173, BW=696KiB/s (713kB/s)(7072KiB/10162msec) 00:24:02.804 slat (usec): min=3, max=8071, avg=30.95, stdev=330.98 00:24:02.804 clat (msec): min=13, max=407, avg=91.75, stdev=41.32 00:24:02.804 lat (msec): min=13, max=407, avg=91.78, stdev=41.32 00:24:02.804 clat percentiles (msec): 00:24:02.804 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 67], 00:24:02.804 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 94], 00:24:02.804 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 127], 95.00th=[ 144], 00:24:02.804 | 99.00th=[ 205], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:24:02.804 | 99.99th=[ 409] 00:24:02.804 bw ( KiB/s): min= 256, max= 1008, per=4.67%, avg=700.10, stdev=158.58, samples=20 00:24:02.804 iops : min= 64, max= 252, avg=174.95, stdev=39.63, 
samples=20 00:24:02.804 lat (msec) : 20=1.81%, 50=4.36%, 100=61.54%, 250=31.39%, 500=0.90% 00:24:02.804 cpu : usr=38.73%, sys=1.31%, ctx=1159, majf=0, minf=9 00:24:02.804 IO depths : 1=1.4%, 2=3.7%, 4=12.3%, 8=70.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:24:02.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.804 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92676: Wed May 15 09:04:17 2024 00:24:02.805 read: IOPS=142, BW=572KiB/s (586kB/s)(5792KiB/10129msec) 00:24:02.805 slat (usec): min=7, max=1044, avg=20.46, stdev=29.38 00:24:02.805 clat (msec): min=48, max=404, avg=111.75, stdev=43.04 00:24:02.805 lat (msec): min=48, max=405, avg=111.77, stdev=43.05 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 56], 5.00th=[ 68], 10.00th=[ 74], 20.00th=[ 84], 00:24:02.805 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 110], 00:24:02.805 | 70.00th=[ 126], 80.00th=[ 132], 90.00th=[ 155], 95.00th=[ 163], 00:24:02.805 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:24:02.805 | 99.99th=[ 405] 00:24:02.805 bw ( KiB/s): min= 256, max= 784, per=3.82%, avg=572.70, stdev=106.85, samples=20 00:24:02.805 iops : min= 64, max= 196, avg=143.15, stdev=26.71, samples=20 00:24:02.805 lat (msec) : 50=0.55%, 100=44.41%, 250=53.94%, 500=1.10% 00:24:02.805 cpu : usr=41.53%, sys=1.76%, ctx=1356, majf=0, minf=9 00:24:02.805 IO depths : 1=3.6%, 2=7.5%, 4=17.3%, 8=62.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=92.0%, 8=2.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92677: Wed May 15 09:04:17 2024 00:24:02.805 read: IOPS=141, BW=568KiB/s (581kB/s)(5760KiB/10144msec) 00:24:02.805 slat (usec): min=7, max=4059, avg=27.76, stdev=222.68 00:24:02.805 clat (msec): min=47, max=312, avg=112.52, stdev=38.53 00:24:02.805 lat (msec): min=47, max=312, avg=112.55, stdev=38.53 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 48], 5.00th=[ 67], 10.00th=[ 72], 20.00th=[ 82], 00:24:02.805 | 30.00th=[ 89], 40.00th=[ 104], 50.00th=[ 111], 60.00th=[ 116], 00:24:02.805 | 70.00th=[ 127], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 167], 00:24:02.805 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:24:02.805 | 99.99th=[ 313] 00:24:02.805 bw ( KiB/s): min= 256, max= 768, per=3.80%, avg=569.65, stdev=104.76, samples=20 00:24:02.805 iops : min= 64, max= 192, avg=142.40, stdev=26.19, samples=20 00:24:02.805 lat (msec) : 50=1.39%, 100=36.53%, 250=60.00%, 500=2.08% 00:24:02.805 cpu : usr=42.65%, sys=1.65%, ctx=1259, majf=0, minf=9 00:24:02.805 IO depths : 1=4.4%, 2=9.3%, 4=20.8%, 8=57.4%, 16=8.1%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=93.0%, 8=1.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92678: Wed May 15 09:04:17 2024 00:24:02.805 
read: IOPS=138, BW=556KiB/s (569kB/s)(5640KiB/10148msec) 00:24:02.805 slat (usec): min=4, max=6062, avg=36.08, stdev=236.69 00:24:02.805 clat (msec): min=41, max=434, avg=114.59, stdev=42.34 00:24:02.805 lat (msec): min=41, max=434, avg=114.62, stdev=42.34 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 52], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 91], 00:24:02.805 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 120], 00:24:02.805 | 70.00th=[ 125], 80.00th=[ 136], 90.00th=[ 153], 95.00th=[ 165], 00:24:02.805 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 435], 99.95th=[ 435], 00:24:02.805 | 99.99th=[ 435] 00:24:02.805 bw ( KiB/s): min= 256, max= 768, per=3.72%, avg=557.45, stdev=104.61, samples=20 00:24:02.805 iops : min= 64, max= 192, avg=139.35, stdev=26.16, samples=20 00:24:02.805 lat (msec) : 50=0.92%, 100=35.04%, 250=62.91%, 500=1.13% 00:24:02.805 cpu : usr=40.69%, sys=1.56%, ctx=1318, majf=0, minf=9 00:24:02.805 IO depths : 1=3.4%, 2=7.6%, 4=18.4%, 8=61.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92679: Wed May 15 09:04:17 2024 00:24:02.805 read: IOPS=180, BW=723KiB/s (740kB/s)(7368KiB/10194msec) 00:24:02.805 slat (usec): min=4, max=8054, avg=30.54, stdev=229.81 00:24:02.805 clat (msec): min=6, max=256, avg=87.77, stdev=33.90 00:24:02.805 lat (msec): min=6, max=256, avg=87.80, stdev=33.91 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 11], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:24:02.805 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 92], 00:24:02.805 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 125], 95.00th=[ 140], 00:24:02.805 | 99.00th=[ 243], 99.50th=[ 257], 99.90th=[ 257], 99.95th=[ 257], 00:24:02.805 | 99.99th=[ 257] 00:24:02.805 bw ( KiB/s): min= 432, max= 1149, per=4.87%, avg=730.15, stdev=155.13, samples=20 00:24:02.805 iops : min= 108, max= 287, avg=182.50, stdev=38.74, samples=20 00:24:02.805 lat (msec) : 10=0.87%, 20=1.74%, 50=4.94%, 100=62.38%, 250=29.53% 00:24:02.805 lat (msec) : 500=0.54% 00:24:02.805 cpu : usr=39.34%, sys=1.47%, ctx=1185, majf=0, minf=9 00:24:02.805 IO depths : 1=0.6%, 2=1.3%, 4=7.2%, 8=77.5%, 16=13.4%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=89.5%, 8=6.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92680: Wed May 15 09:04:17 2024 00:24:02.805 read: IOPS=141, BW=567KiB/s (580kB/s)(5748KiB/10141msec) 00:24:02.805 slat (usec): min=7, max=8069, avg=25.68, stdev=300.10 00:24:02.805 clat (msec): min=47, max=305, avg=112.65, stdev=39.88 00:24:02.805 lat (msec): min=47, max=305, avg=112.67, stdev=39.90 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 48], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 85], 00:24:02.805 | 30.00th=[ 93], 40.00th=[ 100], 50.00th=[ 108], 60.00th=[ 116], 00:24:02.805 | 70.00th=[ 121], 80.00th=[ 129], 90.00th=[ 150], 95.00th=[ 192], 00:24:02.805 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:24:02.805 | 
99.99th=[ 305] 00:24:02.805 bw ( KiB/s): min= 256, max= 640, per=3.79%, avg=568.40, stdev=92.66, samples=20 00:24:02.805 iops : min= 64, max= 160, avg=142.10, stdev=23.17, samples=20 00:24:02.805 lat (msec) : 50=1.53%, 100=40.43%, 250=55.81%, 500=2.23% 00:24:02.805 cpu : usr=36.37%, sys=1.34%, ctx=1046, majf=0, minf=9 00:24:02.805 IO depths : 1=3.4%, 2=7.1%, 4=16.9%, 8=63.3%, 16=9.3%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92681: Wed May 15 09:04:17 2024 00:24:02.805 read: IOPS=143, BW=574KiB/s (588kB/s)(5808KiB/10120msec) 00:24:02.805 slat (usec): min=3, max=9062, avg=40.43, stdev=380.83 00:24:02.805 clat (msec): min=33, max=405, avg=111.22, stdev=45.28 00:24:02.805 lat (msec): min=33, max=405, avg=111.26, stdev=45.28 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 52], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 77], 00:24:02.805 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 117], 00:24:02.805 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 159], 95.00th=[ 176], 00:24:02.805 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:24:02.805 | 99.99th=[ 405] 00:24:02.805 bw ( KiB/s): min= 256, max= 816, per=3.83%, avg=574.15, stdev=134.98, samples=20 00:24:02.805 iops : min= 64, max= 204, avg=143.50, stdev=33.77, samples=20 00:24:02.805 lat (msec) : 50=0.96%, 100=45.45%, 250=52.48%, 500=1.10% 00:24:02.805 cpu : usr=31.94%, sys=1.12%, ctx=950, majf=0, minf=9 00:24:02.805 IO depths : 1=2.1%, 2=5.3%, 4=15.0%, 8=66.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=91.6%, 8=3.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 filename2: (groupid=0, jobs=1): err= 0: pid=92682: Wed May 15 09:04:17 2024 00:24:02.805 read: IOPS=153, BW=614KiB/s (629kB/s)(6244KiB/10163msec) 00:24:02.805 slat (usec): min=7, max=122, avg=15.78, stdev=10.15 00:24:02.805 clat (msec): min=36, max=379, avg=103.52, stdev=37.35 00:24:02.805 lat (msec): min=36, max=379, avg=103.54, stdev=37.35 00:24:02.805 clat percentiles (msec): 00:24:02.805 | 1.00th=[ 48], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 72], 00:24:02.805 | 30.00th=[ 84], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 108], 00:24:02.805 | 70.00th=[ 120], 80.00th=[ 128], 90.00th=[ 146], 95.00th=[ 165], 00:24:02.805 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 380], 99.95th=[ 380], 00:24:02.805 | 99.99th=[ 380] 00:24:02.805 bw ( KiB/s): min= 352, max= 864, per=4.12%, avg=617.90, stdev=137.27, samples=20 00:24:02.805 iops : min= 88, max= 216, avg=154.45, stdev=34.34, samples=20 00:24:02.805 lat (msec) : 50=1.28%, 100=52.98%, 250=44.71%, 500=1.02% 00:24:02.805 cpu : usr=35.48%, sys=1.34%, ctx=1226, majf=0, minf=9 00:24:02.805 IO depths : 1=2.4%, 2=4.9%, 4=12.7%, 8=69.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:24:02.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.805 issued rwts: total=1561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.805 
latency : target=0, window=0, percentile=100.00%, depth=16 00:24:02.805 00:24:02.805 Run status group 0 (all jobs): 00:24:02.805 READ: bw=14.6MiB/s (15.3MB/s), 506KiB/s-723KiB/s (519kB/s-740kB/s), io=149MiB (157MB), run=10111-10196msec 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.805 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 bdev_null0 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 [2024-05-15 09:04:17.912607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:02.806 09:04:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 bdev_null1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.806 { 00:24:02.806 "params": { 00:24:02.806 "name": "Nvme$subsystem", 00:24:02.806 "trtype": "$TEST_TRANSPORT", 00:24:02.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.806 "adrfam": "ipv4", 00:24:02.806 "trsvcid": "$NVMF_PORT", 00:24:02.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.806 "hdgst": ${hdgst:-false}, 00:24:02.806 "ddgst": ${ddgst:-false} 00:24:02.806 }, 00:24:02.806 "method": "bdev_nvme_attach_controller" 00:24:02.806 } 00:24:02.806 EOF 00:24:02.806 )") 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.806 09:04:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.806 { 00:24:02.806 "params": { 00:24:02.806 "name": "Nvme$subsystem", 00:24:02.806 "trtype": "$TEST_TRANSPORT", 00:24:02.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.806 "adrfam": "ipv4", 00:24:02.806 "trsvcid": "$NVMF_PORT", 00:24:02.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.806 "hdgst": ${hdgst:-false}, 00:24:02.806 "ddgst": ${ddgst:-false} 00:24:02.806 }, 00:24:02.806 "method": "bdev_nvme_attach_controller" 00:24:02.806 } 00:24:02.806 EOF 00:24:02.806 )") 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
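The rpc_cmd and fio_bdev wrappers traced above are thin shims over SPDK's scripts/rpc.py client and the fio bdev plugin. Outside the test harness, the target setup and the I/O run captured in this part of the log correspond roughly to the sequence below; this is an illustrative sketch using values taken from the log, and the transport-creation step plus the config/job file names are assumptions not shown in this excerpt:

  # target side: DIF-capable null bdev exposed over NVMe/TCP (arguments as logged)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_transport -t tcp    # assumed; done earlier in the test, not in this excerpt
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: fio drives the subsystems through the spdk_bdev ioengine, with the
  # bdev_nvme_attach_controller JSON (printed just below in the log) passed via --spdk_json_conf
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./nvme.json ./job.fio   # file names assumed; the test pipes both via /dev/fd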
00:24:02.806 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:02.807 "params": { 00:24:02.807 "name": "Nvme0", 00:24:02.807 "trtype": "tcp", 00:24:02.807 "traddr": "10.0.0.2", 00:24:02.807 "adrfam": "ipv4", 00:24:02.807 "trsvcid": "4420", 00:24:02.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.807 "hdgst": false, 00:24:02.807 "ddgst": false 00:24:02.807 }, 00:24:02.807 "method": "bdev_nvme_attach_controller" 00:24:02.807 },{ 00:24:02.807 "params": { 00:24:02.807 "name": "Nvme1", 00:24:02.807 "trtype": "tcp", 00:24:02.807 "traddr": "10.0.0.2", 00:24:02.807 "adrfam": "ipv4", 00:24:02.807 "trsvcid": "4420", 00:24:02.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.807 "hdgst": false, 00:24:02.807 "ddgst": false 00:24:02.807 }, 00:24:02.807 "method": "bdev_nvme_attach_controller" 00:24:02.807 }' 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:02.807 09:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:02.807 09:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:02.807 09:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:02.807 09:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:02.807 09:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.807 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:02.807 ... 00:24:02.807 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:02.807 ... 
00:24:02.807 fio-3.35 00:24:02.807 Starting 4 threads 00:24:08.081 00:24:08.081 filename0: (groupid=0, jobs=1): err= 0: pid=92810: Wed May 15 09:04:23 2024 00:24:08.081 read: IOPS=1602, BW=12.5MiB/s (13.1MB/s)(62.6MiB/5003msec) 00:24:08.081 slat (nsec): min=4771, max=54880, avg=11696.73, stdev=4753.29 00:24:08.081 clat (usec): min=2219, max=9490, avg=4932.43, stdev=828.29 00:24:08.081 lat (usec): min=2231, max=9499, avg=4944.13, stdev=828.06 00:24:08.081 clat percentiles (usec): 00:24:08.081 | 1.00th=[ 3130], 5.00th=[ 4146], 10.00th=[ 4178], 20.00th=[ 4228], 00:24:08.081 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4621], 60.00th=[ 5276], 00:24:08.082 | 70.00th=[ 5473], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 6325], 00:24:08.082 | 99.00th=[ 6783], 99.50th=[ 7898], 99.90th=[ 8586], 99.95th=[ 9241], 00:24:08.082 | 99.99th=[ 9503] 00:24:08.082 bw ( KiB/s): min=10352, max=14336, per=24.61%, avg=12615.11, stdev=1627.15, samples=9 00:24:08.082 iops : min= 1294, max= 1792, avg=1576.89, stdev=203.39, samples=9 00:24:08.082 lat (msec) : 4=1.46%, 10=98.54% 00:24:08.082 cpu : usr=92.36%, sys=5.94%, ctx=10, majf=0, minf=9 00:24:08.082 IO depths : 1=8.1%, 2=25.0%, 4=50.0%, 8=16.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 issued rwts: total=8016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.082 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:08.082 filename0: (groupid=0, jobs=1): err= 0: pid=92811: Wed May 15 09:04:23 2024 00:24:08.082 read: IOPS=1602, BW=12.5MiB/s (13.1MB/s)(62.6MiB/5002msec) 00:24:08.082 slat (usec): min=5, max=185, avg=14.53, stdev= 5.94 00:24:08.082 clat (usec): min=2760, max=8069, avg=4920.82, stdev=770.66 00:24:08.082 lat (usec): min=2766, max=8084, avg=4935.36, stdev=769.55 00:24:08.082 clat percentiles (usec): 00:24:08.082 | 1.00th=[ 4047], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4228], 00:24:08.082 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4621], 60.00th=[ 5211], 00:24:08.082 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6325], 00:24:08.082 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 7898], 00:24:08.082 | 99.99th=[ 8094] 00:24:08.082 bw ( KiB/s): min=10496, max=14336, per=24.64%, avg=12629.33, stdev=1603.84, samples=9 00:24:08.082 iops : min= 1312, max= 1792, avg=1578.67, stdev=200.48, samples=9 00:24:08.082 lat (msec) : 4=0.69%, 10=99.31% 00:24:08.082 cpu : usr=92.50%, sys=5.84%, ctx=18, majf=0, minf=0 00:24:08.082 IO depths : 1=8.3%, 2=25.0%, 4=50.0%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 issued rwts: total=8016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.082 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:08.082 filename1: (groupid=0, jobs=1): err= 0: pid=92812: Wed May 15 09:04:23 2024 00:24:08.082 read: IOPS=1602, BW=12.5MiB/s (13.1MB/s)(62.6MiB/5002msec) 00:24:08.082 slat (nsec): min=3825, max=49560, avg=10927.04, stdev=4696.00 00:24:08.082 clat (usec): min=2418, max=8396, avg=4934.43, stdev=774.32 00:24:08.082 lat (usec): min=2426, max=8404, avg=4945.36, stdev=774.51 00:24:08.082 clat percentiles (usec): 00:24:08.082 | 1.00th=[ 4015], 5.00th=[ 4146], 10.00th=[ 4178], 20.00th=[ 4228], 00:24:08.082 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 
4555], 60.00th=[ 5276], 00:24:08.082 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 5932], 95.00th=[ 6259], 00:24:08.082 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 8094], 99.95th=[ 8160], 00:24:08.082 | 99.99th=[ 8455] 00:24:08.082 bw ( KiB/s): min=10368, max=14435, per=24.66%, avg=12638.56, stdev=1659.16, samples=9 00:24:08.082 iops : min= 1296, max= 1804, avg=1579.78, stdev=207.34, samples=9 00:24:08.082 lat (msec) : 4=0.95%, 10=99.05% 00:24:08.082 cpu : usr=92.88%, sys=5.58%, ctx=12, majf=0, minf=0 00:24:08.082 IO depths : 1=8.9%, 2=25.0%, 4=50.0%, 8=16.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 issued rwts: total=8016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.082 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:08.082 filename1: (groupid=0, jobs=1): err= 0: pid=92813: Wed May 15 09:04:23 2024 00:24:08.082 read: IOPS=1600, BW=12.5MiB/s (13.1MB/s)(62.6MiB/5002msec) 00:24:08.082 slat (nsec): min=3839, max=62084, avg=13962.09, stdev=5025.43 00:24:08.082 clat (usec): min=2234, max=10355, avg=4928.55, stdev=843.22 00:24:08.082 lat (usec): min=2242, max=10364, avg=4942.51, stdev=843.06 00:24:08.082 clat percentiles (usec): 00:24:08.082 | 1.00th=[ 3392], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4228], 00:24:08.082 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4555], 60.00th=[ 5211], 00:24:08.082 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6390], 00:24:08.082 | 99.00th=[ 7570], 99.50th=[ 8291], 99.90th=[ 9110], 99.95th=[ 9503], 00:24:08.082 | 99.99th=[10421] 00:24:08.082 bw ( KiB/s): min=10368, max=14320, per=24.61%, avg=12613.33, stdev=1629.76, samples=9 00:24:08.082 iops : min= 1296, max= 1790, avg=1576.67, stdev=203.72, samples=9 00:24:08.082 lat (msec) : 4=1.84%, 10=98.14%, 20=0.02% 00:24:08.082 cpu : usr=92.14%, sys=6.22%, ctx=814, majf=0, minf=9 00:24:08.082 IO depths : 1=7.6%, 2=25.0%, 4=50.0%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.082 issued rwts: total=8008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.082 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:08.082 00:24:08.082 Run status group 0 (all jobs): 00:24:08.082 READ: bw=50.1MiB/s (52.5MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=250MiB (263MB), run=5002-5003msec 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 ************************************ 00:24:08.082 END TEST fio_dif_rand_params 00:24:08.082 ************************************ 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.082 00:24:08.082 real 0m23.504s 00:24:08.082 user 2m5.575s 00:24:08.082 sys 0m6.168s 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:08.082 09:04:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 09:04:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:08.082 09:04:24 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:08.082 09:04:24 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:08.082 09:04:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 ************************************ 00:24:08.082 START TEST fio_dif_digest 00:24:08.082 ************************************ 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:08.082 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:08.083 09:04:24 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.083 bdev_null0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.083 [2024-05-15 09:04:24.059808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.083 { 00:24:08.083 "params": { 00:24:08.083 "name": "Nvme$subsystem", 00:24:08.083 "trtype": "$TEST_TRANSPORT", 00:24:08.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.083 "adrfam": "ipv4", 00:24:08.083 "trsvcid": "$NVMF_PORT", 00:24:08.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.083 "hdgst": ${hdgst:-false}, 00:24:08.083 "ddgst": ${ddgst:-false} 00:24:08.083 }, 00:24:08.083 "method": "bdev_nvme_attach_controller" 00:24:08.083 } 00:24:08.083 EOF 00:24:08.083 )") 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- 
target/dif.sh@82 -- # gen_fio_conf 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
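[editor's note] For the digest pass the same JSON helper (gen_nvmf_target_json, target/dif.sh@51) is reused with hdgst and ddgst set to true, and fio is launched with the SPDK bdev plugin preloaded, taking the generated JSON on /dev/fd/62 and the job file on /dev/fd/61 (common/autotest_common.sh@1348 below). A minimal sketch of that invocation follows, assuming the harness's gen_nvmf_target_json is sourced and that the attached controller exposes a bdev named Nvme0n1; the job options mirror the parameters set at target/dif.sh@127 (128k blocks, iodepth 3, 3 jobs, 10 s runtime), but the real job file is produced by gen_fio_conf and is not shown in this excerpt, so its exact contents are illustrative.

#!/usr/bin/env bash
# Sketch of the fio launch traced at common/autotest_common.sh@1348.
# gen_nvmf_target_json comes from the sourced test harness; the bdev name
# Nvme0n1 and the job file body are assumptions, not copied from the log.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Digest flags are picked up by the helper's ${hdgst:-false}/${ddgst:-false}
# expansions, which is how the printed config ends up with "hdgst": true.
hdgst=true
ddgst=true

job=$(mktemp)
cat > "$job" <<EOF
[global]
thread=1
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF

# Preload the SPDK fio plugin and feed the generated JSON on a /dev/fd path,
# mirroring the "--spdk_json_conf /dev/fd/62 /dev/fd/61" form in the trace.
LD_PRELOAD="$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    "$job"
rm -f "$job"

thread=1 is kept because the SPDK bdev ioengine runs in-process and needs fio's thread mode rather than forked jobs; everything else in the job section is a stand-in for what gen_fio_conf emits.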
00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:08.083 "params": { 00:24:08.083 "name": "Nvme0", 00:24:08.083 "trtype": "tcp", 00:24:08.083 "traddr": "10.0.0.2", 00:24:08.083 "adrfam": "ipv4", 00:24:08.083 "trsvcid": "4420", 00:24:08.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:08.083 "hdgst": true, 00:24:08.083 "ddgst": true 00:24:08.083 }, 00:24:08.083 "method": "bdev_nvme_attach_controller" 00:24:08.083 }' 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:08.083 09:04:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.083 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:08.083 ... 
00:24:08.083 fio-3.35 00:24:08.083 Starting 3 threads 00:24:20.280 00:24:20.280 filename0: (groupid=0, jobs=1): err= 0: pid=92918: Wed May 15 09:04:34 2024 00:24:20.280 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(239MiB/10006msec) 00:24:20.280 slat (nsec): min=4647, max=72167, avg=19863.18, stdev=8508.42 00:24:20.280 clat (usec): min=7920, max=46682, avg=15655.36, stdev=3354.59 00:24:20.280 lat (usec): min=7951, max=46740, avg=15675.23, stdev=3357.04 00:24:20.280 clat percentiles (usec): 00:24:20.280 | 1.00th=[ 8979], 5.00th=[12649], 10.00th=[13304], 20.00th=[14091], 00:24:20.280 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:24:20.280 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17957], 95.00th=[19268], 00:24:20.280 | 99.00th=[32375], 99.50th=[42730], 99.90th=[46400], 99.95th=[46924], 00:24:20.280 | 99.99th=[46924] 00:24:20.280 bw ( KiB/s): min=16384, max=27136, per=33.91%, avg=24475.90, stdev=2623.34, samples=20 00:24:20.280 iops : min= 128, max= 212, avg=191.20, stdev=20.50, samples=20 00:24:20.280 lat (msec) : 10=2.30%, 20=94.15%, 50=3.55% 00:24:20.280 cpu : usr=91.11%, sys=6.80%, ctx=110, majf=0, minf=0 00:24:20.280 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.280 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:20.280 filename0: (groupid=0, jobs=1): err= 0: pid=92919: Wed May 15 09:04:34 2024 00:24:20.280 read: IOPS=161, BW=20.2MiB/s (21.1MB/s)(202MiB/10006msec) 00:24:20.280 slat (nsec): min=5002, max=87239, avg=19611.30, stdev=7606.85 00:24:20.280 clat (usec): min=6851, max=55634, avg=18570.79, stdev=3656.92 00:24:20.280 lat (usec): min=6874, max=55643, avg=18590.40, stdev=3657.75 00:24:20.280 clat percentiles (usec): 00:24:20.280 | 1.00th=[11076], 5.00th=[15533], 10.00th=[16450], 20.00th=[16909], 00:24:20.280 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:24:20.280 | 70.00th=[19006], 80.00th=[19530], 90.00th=[21103], 95.00th=[22938], 00:24:20.280 | 99.00th=[35390], 99.50th=[45351], 99.90th=[53216], 99.95th=[55837], 00:24:20.280 | 99.99th=[55837] 00:24:20.280 bw ( KiB/s): min=14336, max=22784, per=28.59%, avg=20635.50, stdev=1885.93, samples=20 00:24:20.280 iops : min= 112, max= 178, avg=161.20, stdev=14.75, samples=20 00:24:20.280 lat (msec) : 10=0.12%, 20=83.89%, 50=15.74%, 100=0.25% 00:24:20.280 cpu : usr=91.79%, sys=6.63%, ctx=57, majf=0, minf=9 00:24:20.280 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.280 issued rwts: total=1614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:20.280 filename0: (groupid=0, jobs=1): err= 0: pid=92920: Wed May 15 09:04:34 2024 00:24:20.280 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(264MiB/10006msec) 00:24:20.280 slat (usec): min=4, max=100, avg=20.01, stdev= 8.40 00:24:20.280 clat (usec): min=6094, max=55746, avg=14170.92, stdev=4471.46 00:24:20.280 lat (usec): min=6124, max=55763, avg=14190.93, stdev=4472.19 00:24:20.280 clat percentiles (usec): 00:24:20.280 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11863], 20.00th=[12387], 00:24:20.280 | 
30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:24:20.280 | 70.00th=[14091], 80.00th=[14746], 90.00th=[16450], 95.00th=[18220], 00:24:20.280 | 99.00th=[39584], 99.50th=[52691], 99.90th=[54264], 99.95th=[55313], 00:24:20.280 | 99.99th=[55837] 00:24:20.280 bw ( KiB/s): min=18432, max=29952, per=37.32%, avg=26933.89, stdev=2867.99, samples=19 00:24:20.280 iops : min= 144, max= 234, avg=210.42, stdev=22.41, samples=19 00:24:20.280 lat (msec) : 10=0.05%, 20=97.78%, 50=1.47%, 100=0.71% 00:24:20.280 cpu : usr=91.38%, sys=6.70%, ctx=19, majf=0, minf=9 00:24:20.280 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.280 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:20.280 00:24:20.280 Run status group 0 (all jobs): 00:24:20.280 READ: bw=70.5MiB/s (73.9MB/s), 20.2MiB/s-26.4MiB/s (21.1MB/s-27.7MB/s), io=705MiB (740MB), run=10006-10006msec 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.280 00:24:20.280 real 0m10.881s 00:24:20.280 user 0m28.000s 00:24:20.280 sys 0m2.249s 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:20.280 ************************************ 00:24:20.280 END TEST fio_dif_digest 00:24:20.280 ************************************ 00:24:20.280 09:04:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.280 09:04:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:20.280 09:04:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:20.280 09:04:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.280 09:04:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:20.280 09:04:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.280 09:04:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:20.280 09:04:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.281 09:04:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.281 rmmod nvme_tcp 00:24:20.281 rmmod nvme_fabrics 00:24:20.281 rmmod nvme_keyring 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 92174 ']' 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 92174 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 92174 ']' 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 92174 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92174 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:20.281 killing process with pid 92174 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92174' 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@965 -- # kill 92174 00:24:20.281 [2024-05-15 09:04:35.059235] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@970 -- # wait 92174 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:20.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:20.281 Waiting for block devices as requested 00:24:20.281 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:20.281 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.281 09:04:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:20.281 00:24:20.281 real 0m58.597s 00:24:20.281 user 3m48.201s 00:24:20.281 sys 0m16.178s 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:20.281 ************************************ 00:24:20.281 09:04:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:20.281 END TEST nvmf_dif 00:24:20.281 ************************************ 00:24:20.281 09:04:35 -- spdk/autotest.sh@12 -- # hostname 00:24:20.281 09:04:35 -- spdk/autotest.sh@12 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/nvmf_dif.info 00:24:20.281 geninfo: WARNING: invalid characters removed from 
testname! 00:24:46.843 ### URING mentions in coverage after the test ###: 00:24:46.843 09:05:02 -- spdk/autotest.sh@13 -- # echo '### URING mentions in coverage after the test ###:' 00:24:46.843 09:05:02 -- spdk/autotest.sh@14 -- # cat /home/vagrant/spdk_repo/spdk/../output/nvmf_dif.info 00:24:46.843 09:05:02 -- spdk/autotest.sh@14 -- # grep -i uring 00:24:46.843 09:05:02 -- spdk/autotest.sh@15 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_dif.info 00:24:46.843 09:05:02 -- spdk/autotest.sh@302 -- # run_test_wrapper nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:46.843 09:05:02 -- spdk/autotest.sh@10 -- # local test_name=nvmf_abort_qd_sizes 00:24:46.843 09:05:02 -- spdk/autotest.sh@11 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:46.843 09:05:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:46.843 09:05:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:46.843 09:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:46.843 ************************************ 00:24:46.843 START TEST nvmf_abort_qd_sizes 00:24:46.843 ************************************ 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:46.843 * Looking for test storage... 00:24:46.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.843 09:05:02 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:46.844 Cannot find device "nvmf_tgt_br" 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:46.844 Cannot find device "nvmf_tgt_br2" 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:46.844 Cannot find device "nvmf_tgt_br" 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:46.844 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:47.102 Cannot find device "nvmf_tgt_br2" 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:47.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@162 -- # true 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:47.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:47.102 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:47.103 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:47.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:24:47.362 00:24:47.362 --- 10.0.0.2 ping statistics --- 00:24:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.362 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:47.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:47.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:24:47.362 00:24:47.362 --- 10.0.0.3 ping statistics --- 00:24:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.362 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:47.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:47.362 00:24:47.362 --- 10.0.0.1 ping statistics --- 00:24:47.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.362 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:47.362 09:05:03 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:47.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:47.929 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:47.929 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=94104 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 94104 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 94104 ']' 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:48.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:48.187 09:05:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:48.187 [2024-05-15 09:05:04.266622] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:24:48.187 [2024-05-15 09:05:04.266719] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.187 [2024-05-15 09:05:04.411143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.447 [2024-05-15 09:05:04.482378] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.447 [2024-05-15 09:05:04.482678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.447 [2024-05-15 09:05:04.482792] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.447 [2024-05-15 09:05:04.482909] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.447 [2024-05-15 09:05:04.482988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.447 [2024-05-15 09:05:04.483242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.447 [2024-05-15 09:05:04.483364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.447 [2024-05-15 09:05:04.483972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.447 [2024-05-15 09:05:04.483986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.401 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:49.401 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:24:49.401 09:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.401 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:24:49.402 09:05:05 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 ************************************ 00:24:49.402 START TEST spdk_target_abort 00:24:49.402 ************************************ 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 spdk_targetn1 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 [2024-05-15 09:05:05.556912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:49.402 [2024-05-15 09:05:05.584842] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:49.402 [2024-05-15 09:05:05.585090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:49.402 09:05:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.686 Initializing NVMe Controllers 00:24:52.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:52.686 Initialization complete. Launching workers. 
00:24:52.686 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11881, failed: 0 00:24:52.686 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1017, failed to submit 10864 00:24:52.686 success 753, unsuccess 264, failed 0 00:24:52.686 09:05:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:52.686 09:05:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:55.970 [2024-05-15 09:05:12.038606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449520 is same with the state(5) to be set 00:24:55.970 [2024-05-15 09:05:12.038661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449520 is same with the state(5) to be set 00:24:55.970 Initializing NVMe Controllers 00:24:55.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:55.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:55.970 Initialization complete. Launching workers. 00:24:55.970 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5987, failed: 0 00:24:55.970 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1268, failed to submit 4719 00:24:55.970 success 242, unsuccess 1026, failed 0 00:24:55.970 09:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:55.970 09:05:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:59.261 Initializing NVMe Controllers 00:24:59.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:59.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:59.261 Initialization complete. Launching workers. 
00:24:59.261 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30684, failed: 0 00:24:59.261 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 28062 00:24:59.261 success 469, unsuccess 2153, failed 0 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.261 09:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 94104 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 94104 ']' 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 94104 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94104 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:00.635 killing process with pid 94104 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94104' 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 94104 00:25:00.635 [2024-05-15 09:05:16.464652] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 94104 00:25:00.635 00:25:00.635 real 0m11.188s 00:25:00.635 user 0m46.388s 00:25:00.635 sys 0m1.641s 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:00.635 ************************************ 00:25:00.635 END TEST spdk_target_abort 00:25:00.635 ************************************ 00:25:00.635 09:05:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:00.635 09:05:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:00.635 09:05:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:25:00.635 09:05:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:00.635 ************************************ 00:25:00.635 START TEST kernel_target_abort 00:25:00.635 ************************************ 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:00.635 09:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:00.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.892 Waiting for block devices as requested 00:25:01.150 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:01.150 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:01.150 No valid GPT data, bailing 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:01.150 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:25:01.151 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:01.151 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:01.151 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:01.151 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:01.151 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:01.409 No valid GPT data, bailing 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:01.409 No valid GPT data, bailing 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:01.409 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:01.410 No valid GPT data, bailing 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe --hostid=3dd9b97a-2f76-4777-857d-568f88228ebe -a 10.0.0.1 -t tcp -s 4420 00:25:01.410 00:25:01.410 Discovery Log Number of Records 2, Generation counter 2 00:25:01.410 =====Discovery Log Entry 0====== 00:25:01.410 trtype: tcp 00:25:01.410 adrfam: ipv4 00:25:01.410 subtype: current discovery subsystem 00:25:01.410 treq: not specified, sq flow control disable supported 00:25:01.410 portid: 1 00:25:01.410 trsvcid: 4420 00:25:01.410 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:01.410 traddr: 10.0.0.1 00:25:01.410 eflags: none 00:25:01.410 sectype: none 00:25:01.410 =====Discovery Log Entry 1====== 00:25:01.410 trtype: tcp 00:25:01.410 adrfam: ipv4 00:25:01.410 subtype: nvme subsystem 00:25:01.410 treq: not specified, sq flow control disable supported 00:25:01.410 portid: 1 00:25:01.410 trsvcid: 4420 00:25:01.410 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:01.410 traddr: 10.0.0.1 00:25:01.410 eflags: none 00:25:01.410 sectype: none 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:01.410 09:05:17 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:01.410 09:05:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:04.700 Initializing NVMe Controllers 00:25:04.700 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:04.700 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:04.700 Initialization complete. Launching workers. 00:25:04.700 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34348, failed: 0 00:25:04.700 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34348, failed to submit 0 00:25:04.700 success 0, unsuccess 34348, failed 0 00:25:04.700 09:05:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:04.700 09:05:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:07.981 Initializing NVMe Controllers 00:25:07.981 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:07.981 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:07.981 Initialization complete. Launching workers. 
00:25:07.981 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67134, failed: 0 00:25:07.981 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29120, failed to submit 38014 00:25:07.981 success 0, unsuccess 29120, failed 0 00:25:07.981 09:05:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:07.981 09:05:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:11.331 Initializing NVMe Controllers 00:25:11.331 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:11.331 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:11.331 Initialization complete. Launching workers. 00:25:11.331 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76740, failed: 0 00:25:11.331 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19162, failed to submit 57578 00:25:11.331 success 0, unsuccess 19162, failed 0 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:11.331 09:05:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:11.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:13.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:13.535 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:13.535 00:25:13.535 real 0m12.873s 00:25:13.535 user 0m6.309s 00:25:13.535 sys 0m3.976s 00:25:13.535 ************************************ 00:25:13.535 END TEST kernel_target_abort 00:25:13.535 ************************************ 00:25:13.535 09:05:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:13.535 09:05:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:13.536 
09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.536 rmmod nvme_tcp 00:25:13.536 rmmod nvme_fabrics 00:25:13.536 rmmod nvme_keyring 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 94104 ']' 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 94104 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 94104 ']' 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 94104 00:25:13.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (94104) - No such process 00:25:13.536 Process with pid 94104 is not found 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 94104 is not found' 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:13.536 09:05:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:13.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.053 Waiting for block devices as requested 00:25:14.053 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.053 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:14.053 00:25:14.053 real 0m27.353s 00:25:14.053 user 0m53.981s 00:25:14.053 sys 0m6.905s 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:14.053 09:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:14.053 ************************************ 00:25:14.053 END TEST nvmf_abort_qd_sizes 00:25:14.053 ************************************ 00:25:14.312 09:05:30 -- spdk/autotest.sh@12 -- # hostname 00:25:14.312 09:05:30 -- spdk/autotest.sh@12 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t 
fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/nvmf_abort_qd_sizes.info 00:25:14.312 geninfo: WARNING: invalid characters removed from testname! 00:25:46.383 ### URING mentions in coverage after the test ###: 00:25:46.383 09:05:57 -- spdk/autotest.sh@13 -- # echo '### URING mentions in coverage after the test ###:' 00:25:46.383 09:05:57 -- spdk/autotest.sh@14 -- # cat /home/vagrant/spdk_repo/spdk/../output/nvmf_abort_qd_sizes.info 00:25:46.383 09:05:57 -- spdk/autotest.sh@14 -- # grep -i uring 00:25:46.383 09:05:57 -- spdk/autotest.sh@15 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_abort_qd_sizes.info 00:25:46.383 09:05:57 -- spdk/autotest.sh@304 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:46.383 09:05:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:46.383 09:05:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:46.383 09:05:57 -- common/autotest_common.sh@10 -- # set +x 00:25:46.383 ************************************ 00:25:46.383 START TEST keyring_file 00:25:46.383 ************************************ 00:25:46.383 09:05:57 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:46.383 * Looking for test storage... 00:25:46.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:46.383 09:05:57 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:46.383 09:05:57 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3dd9b97a-2f76-4777-857d-568f88228ebe 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=3dd9b97a-2f76-4777-857d-568f88228ebe 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.383 09:05:57 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:46.383 09:05:57 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.383 09:05:57 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.383 09:05:57 keyring_file -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.383 09:05:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.383 09:05:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.383 09:05:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.383 09:05:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:46.384 09:05:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@47 -- # : 0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:46.384 09:05:57 keyring_file -- 
keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.64DTqH8lm3 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.64DTqH8lm3 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.64DTqH8lm3 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.64DTqH8lm3 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rZB1uBi0OL 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:46.384 09:05:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rZB1uBi0OL 00:25:46.384 09:05:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rZB1uBi0OL 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rZB1uBi0OL 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=95568 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:46.384 09:05:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 95568 00:25:46.384 09:05:57 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 95568 ']' 00:25:46.384 09:05:57 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.384 09:05:57 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:46.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
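For reference, the key preparation traced above reduces to the following minimal sketch (assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk and the bperf RPC socket /var/tmp/bperf.sock used later in this run; $psk and the temp-file name are hypothetical placeholders, since the real values come from mktemp and the format_interchange_psk helper):
  # Write an interchange-format TLS PSK to a private temp file, as prep_key does.
  key0path=$(mktemp)            # real run: /tmp/tmp.XXXXXXXXXX
  echo "$psk" > "$key0path"     # $psk: output of format_interchange_psk <hex-key> 0
  chmod 0600 "$key0path"
  # Register the file-backed key with the bdevperf instance once it is up.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_file_add_key key0 "$key0path"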
00:25:46.384 09:05:57 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.384 09:05:57 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:46.384 09:05:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:46.384 [2024-05-15 09:05:57.479027] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:25:46.384 [2024-05-15 09:05:57.479132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95568 ] 00:25:46.384 [2024-05-15 09:05:57.617150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.384 [2024-05-15 09:05:57.692337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:46.384 09:05:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:46.384 [2024-05-15 09:05:58.514026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.384 null0 00:25:46.384 [2024-05-15 09:05:58.545986] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:46.384 [2024-05-15 09:05:58.546099] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:46.384 [2024-05-15 09:05:58.546309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:46.384 [2024-05-15 09:05:58.554006] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.384 09:05:58 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:46.384 [2024-05-15 09:05:58.566001] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:46.384 2024/05/15 09:05:58 error on JSON-RPC call, method: 
nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:25:46.384 request: 00:25:46.384 { 00:25:46.384 "method": "nvmf_subsystem_add_listener", 00:25:46.384 "params": { 00:25:46.384 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.384 "secure_channel": false, 00:25:46.384 "listen_address": { 00:25:46.384 "trtype": "tcp", 00:25:46.384 "traddr": "127.0.0.1", 00:25:46.384 "trsvcid": "4420" 00:25:46.384 } 00:25:46.384 } 00:25:46.384 } 00:25:46.384 Got JSON-RPC error response 00:25:46.384 GoRPCClient: error on JSON-RPC call 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:46.384 09:05:58 keyring_file -- keyring/file.sh@46 -- # bperfpid=95603 00:25:46.384 09:05:58 keyring_file -- keyring/file.sh@48 -- # waitforlisten 95603 /var/tmp/bperf.sock 00:25:46.384 09:05:58 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 95603 ']' 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:46.384 [2024-05-15 09:05:58.621522] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
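The rejected nvmf_subsystem_add_listener call above is the test's expected-failure path: the target already listens on 127.0.0.1:4420, so the NOT wrapper only passes when the RPC returns an error. A minimal sketch of that idiom (assuming the default /var/tmp/spdk.sock RPC socket of the target started above):
  # Expect "Listener already exists"; treat success as a test failure.
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
       -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo 'listener add unexpectedly succeeded' >&2
    exit 1
  fi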
00:25:46.384 [2024-05-15 09:05:58.621622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95603 ] 00:25:46.384 [2024-05-15 09:05:58.757975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.384 [2024-05-15 09:05:58.828968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:46.384 09:05:58 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:46.384 09:05:58 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:46.385 09:05:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:46.385 09:05:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rZB1uBi0OL 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rZB1uBi0OL 00:25:46.385 09:05:59 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:25:46.385 09:05:59 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.385 09:05:59 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.64DTqH8lm3 == \/\t\m\p\/\t\m\p\.\6\4\D\T\q\H\8\l\m\3 ]] 00:25:46.385 09:05:59 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:25:46.385 09:05:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.385 09:05:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.385 09:06:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rZB1uBi0OL == \/\t\m\p\/\t\m\p\.\r\Z\B\1\u\B\i\0\O\L ]] 00:25:46.385 09:06:00 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:46.385 09:06:00 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:25:46.385 09:06:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:46.385 09:06:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:46.385 09:06:00 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:46.385 09:06:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:46.385 [2024-05-15 09:06:00.965420] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.385 nvme0n1 00:25:46.385 09:06:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:46.385 09:06:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:46.385 09:06:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:46.385 09:06:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:46.385 09:06:01 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:46.385 09:06:01 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:46.385 Running I/O for 1 seconds... 
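The repeated keyring_get_keys/jq calls above implement the test's reference-count checks: after bdev_nvme_attach_controller uses key0 as the TLS PSK, key0's refcnt is expected to read 2 while key1 stays at 1. A condensed sketch of the get_refcnt helper traced above, against the same bperf RPC socket (the one-step jq filter is a simplification of the two-step filter in the trace):
  # Return the refcnt of a named key registered with the bperf instance.
  get_refcnt() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
      jq -r ".[] | select(.name == \"$1\") | .refcnt"
  }
  [[ $(get_refcnt key0) == 2 ]]
  [[ $(get_refcnt key1) == 1 ]]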
00:25:46.643 00:25:46.643 Latency(us) 00:25:46.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.643 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:46.643 nvme0n1 : 1.01 11181.41 43.68 0.00 0.00 11408.22 6047.19 21328.99 00:25:46.643 =================================================================================================================== 00:25:46.643 Total : 11181.41 43.68 0.00 0.00 11408.22 6047.19 21328.99 00:25:46.643 0 00:25:46.643 09:06:02 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:46.643 09:06:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:47.209 09:06:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:47.209 09:06:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:47.209 09:06:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:47.209 09:06:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:47.209 09:06:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:47.209 09:06:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:47.466 09:06:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:47.466 09:06:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:47.466 09:06:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:47.466 09:06:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:47.466 09:06:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:47.466 09:06:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:47.466 09:06:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:47.724 09:06:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:47.724 09:06:03 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:47.724 09:06:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:47.724 09:06:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:47.982 [2024-05-15 09:06:04.031025] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:47.982 [2024-05-15 09:06:04.031665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7abf10 (107): Transport endpoint is not connected 00:25:47.982 [2024-05-15 09:06:04.032653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7abf10 (9): Bad file descriptor 00:25:47.982 [2024-05-15 09:06:04.033648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:47.982 [2024-05-15 09:06:04.033675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:47.982 [2024-05-15 09:06:04.033686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:47.982 2024/05/15 09:06:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:47.982 request: 00:25:47.982 { 00:25:47.982 "method": "bdev_nvme_attach_controller", 00:25:47.982 "params": { 00:25:47.982 "name": "nvme0", 00:25:47.982 "trtype": "tcp", 00:25:47.982 "traddr": "127.0.0.1", 00:25:47.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.982 "adrfam": "ipv4", 00:25:47.982 "trsvcid": "4420", 00:25:47.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.982 "psk": "key1" 00:25:47.982 } 00:25:47.982 } 00:25:47.982 Got JSON-RPC error response 00:25:47.982 GoRPCClient: error on JSON-RPC call 00:25:47.982 09:06:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:47.982 09:06:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:47.982 09:06:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:47.982 09:06:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:47.982 09:06:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:25:47.982 09:06:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:47.982 09:06:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:47.982 09:06:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:47.982 09:06:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:47.982 09:06:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:48.240 09:06:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:25:48.240 09:06:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:25:48.240 09:06:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:48.240 09:06:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:48.240 09:06:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:48.240 09:06:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:48.240 09:06:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:48.499 09:06:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:48.499 09:06:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:25:48.499 09:06:04 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:48.792 09:06:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:25:48.792 09:06:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:49.050 09:06:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:25:49.050 09:06:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:49.050 09:06:05 keyring_file -- keyring/file.sh@77 -- # jq length 00:25:49.309 09:06:05 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:25:49.309 09:06:05 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.64DTqH8lm3 00:25:49.309 09:06:05 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.309 09:06:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:49.309 09:06:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:49.568 [2024-05-15 09:06:05.724043] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.64DTqH8lm3': 0100660 00:25:49.568 [2024-05-15 09:06:05.724091] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:49.568 2024/05/15 09:06:05 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.64DTqH8lm3], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:25:49.568 request: 00:25:49.568 { 00:25:49.568 "method": "keyring_file_add_key", 00:25:49.568 "params": { 00:25:49.568 "name": "key0", 00:25:49.568 "path": "/tmp/tmp.64DTqH8lm3" 00:25:49.568 } 00:25:49.568 } 00:25:49.568 Got JSON-RPC error response 00:25:49.568 GoRPCClient: error on JSON-RPC call 00:25:49.568 09:06:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:49.568 09:06:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:49.568 09:06:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:49.568 09:06:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:49.568 09:06:05 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.64DTqH8lm3 00:25:49.568 09:06:05 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:49.568 09:06:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.64DTqH8lm3 00:25:49.827 09:06:06 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.64DTqH8lm3 00:25:49.827 09:06:06 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:25:49.827 09:06:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:49.827 09:06:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:49.827 09:06:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:49.827 09:06:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:49.827 09:06:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:50.394 09:06:06 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:25:50.394 09:06:06 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:50.394 09:06:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:50.394 09:06:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:50.653 [2024-05-15 09:06:06.660238] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.64DTqH8lm3': No such file or directory 00:25:50.653 [2024-05-15 09:06:06.660288] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:50.653 [2024-05-15 09:06:06.660318] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:50.653 [2024-05-15 09:06:06.660327] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:50.653 [2024-05-15 09:06:06.660337] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:50.653 2024/05/15 09:06:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:25:50.653 request: 00:25:50.653 { 00:25:50.653 "method": "bdev_nvme_attach_controller", 00:25:50.653 "params": { 00:25:50.653 "name": "nvme0", 00:25:50.653 "trtype": "tcp", 00:25:50.653 "traddr": "127.0.0.1", 00:25:50.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:50.653 "adrfam": "ipv4", 00:25:50.653 "trsvcid": "4420", 00:25:50.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.653 "psk": "key0" 00:25:50.653 } 00:25:50.653 } 
00:25:50.653 Got JSON-RPC error response 00:25:50.653 GoRPCClient: error on JSON-RPC call 00:25:50.653 09:06:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:50.653 09:06:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:50.653 09:06:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:50.653 09:06:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:50.653 09:06:06 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:25:50.653 09:06:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:50.913 09:06:06 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gNvMR1Lef1 00:25:50.913 09:06:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:50.913 09:06:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:50.913 09:06:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.913 09:06:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:50.913 09:06:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:50.913 09:06:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:50.913 09:06:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:50.913 09:06:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gNvMR1Lef1 00:25:50.913 09:06:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gNvMR1Lef1 00:25:50.913 09:06:07 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.gNvMR1Lef1 00:25:50.913 09:06:07 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gNvMR1Lef1 00:25:50.913 09:06:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gNvMR1Lef1 00:25:51.172 09:06:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:51.172 09:06:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:51.430 nvme0n1 00:25:51.430 09:06:07 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:25:51.430 09:06:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:51.430 09:06:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:51.430 09:06:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:51.430 09:06:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:51.430 09:06:07 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:51.710 09:06:07 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:25:51.710 09:06:07 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:25:51.710 09:06:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:51.969 09:06:08 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:25:51.969 09:06:08 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:25:51.969 09:06:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:51.969 09:06:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:51.969 09:06:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:52.228 09:06:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:25:52.228 09:06:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:25:52.228 09:06:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:52.228 09:06:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:52.228 09:06:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:52.228 09:06:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:52.228 09:06:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:52.487 09:06:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:25:52.487 09:06:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:52.487 09:06:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:53.054 09:06:09 keyring_file -- keyring/file.sh@104 -- # jq length 00:25:53.054 09:06:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:25:53.054 09:06:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:53.313 09:06:09 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:25:53.313 09:06:09 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gNvMR1Lef1 00:25:53.313 09:06:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gNvMR1Lef1 00:25:53.580 09:06:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rZB1uBi0OL 00:25:53.580 09:06:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rZB1uBi0OL 00:25:53.865 09:06:09 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:53.865 09:06:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:54.124 nvme0n1 00:25:54.124 09:06:10 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:25:54.124 09:06:10 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:54.691 09:06:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:25:54.691 "subsystems": [ 00:25:54.691 { 00:25:54.691 "subsystem": "keyring", 00:25:54.691 "config": [ 00:25:54.691 { 00:25:54.691 "method": "keyring_file_add_key", 00:25:54.691 "params": { 00:25:54.691 "name": "key0", 00:25:54.691 "path": "/tmp/tmp.gNvMR1Lef1" 00:25:54.691 } 00:25:54.691 }, 00:25:54.691 { 00:25:54.691 "method": "keyring_file_add_key", 00:25:54.691 "params": { 00:25:54.691 "name": "key1", 00:25:54.691 "path": "/tmp/tmp.rZB1uBi0OL" 00:25:54.691 } 00:25:54.691 } 00:25:54.691 ] 00:25:54.691 }, 00:25:54.691 { 00:25:54.691 "subsystem": "iobuf", 00:25:54.691 "config": [ 00:25:54.691 { 00:25:54.691 "method": "iobuf_set_options", 00:25:54.691 "params": { 00:25:54.691 "large_bufsize": 135168, 00:25:54.691 "large_pool_count": 1024, 00:25:54.691 "small_bufsize": 8192, 00:25:54.691 "small_pool_count": 8192 00:25:54.691 } 00:25:54.691 } 00:25:54.691 ] 00:25:54.691 }, 00:25:54.691 { 00:25:54.691 "subsystem": "sock", 00:25:54.691 "config": [ 00:25:54.691 { 00:25:54.691 "method": "sock_set_default_impl", 00:25:54.691 "params": { 00:25:54.691 "impl_name": "posix" 00:25:54.691 } 00:25:54.691 }, 00:25:54.691 { 00:25:54.691 "method": "sock_impl_set_options", 00:25:54.691 "params": { 00:25:54.691 "enable_ktls": false, 00:25:54.691 "enable_placement_id": 0, 00:25:54.691 "enable_quickack": false, 00:25:54.691 "enable_recv_pipe": true, 00:25:54.691 "enable_zerocopy_send_client": false, 00:25:54.691 "enable_zerocopy_send_server": true, 00:25:54.691 "impl_name": "ssl", 00:25:54.691 "recv_buf_size": 4096, 00:25:54.691 "send_buf_size": 4096, 00:25:54.691 "tls_version": 0, 00:25:54.691 "zerocopy_threshold": 0 00:25:54.691 } 00:25:54.691 }, 00:25:54.691 { 00:25:54.691 "method": "sock_impl_set_options", 00:25:54.691 "params": { 00:25:54.691 "enable_ktls": false, 00:25:54.691 "enable_placement_id": 0, 00:25:54.691 "enable_quickack": false, 00:25:54.691 "enable_recv_pipe": true, 00:25:54.691 "enable_zerocopy_send_client": false, 00:25:54.691 "enable_zerocopy_send_server": true, 00:25:54.692 "impl_name": "posix", 00:25:54.692 "recv_buf_size": 2097152, 00:25:54.692 "send_buf_size": 2097152, 00:25:54.692 "tls_version": 0, 00:25:54.692 "zerocopy_threshold": 0 00:25:54.692 } 00:25:54.692 } 00:25:54.692 ] 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "subsystem": "vmd", 00:25:54.692 "config": [] 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "subsystem": "accel", 00:25:54.692 "config": [ 00:25:54.692 { 00:25:54.692 "method": "accel_set_options", 00:25:54.692 "params": { 00:25:54.692 "buf_count": 2048, 00:25:54.692 "large_cache_size": 16, 00:25:54.692 "sequence_count": 2048, 00:25:54.692 "small_cache_size": 128, 00:25:54.692 "task_count": 2048 00:25:54.692 } 00:25:54.692 } 00:25:54.692 ] 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "subsystem": "bdev", 00:25:54.692 "config": [ 00:25:54.692 { 00:25:54.692 "method": "bdev_set_options", 00:25:54.692 "params": { 00:25:54.692 "bdev_auto_examine": true, 00:25:54.692 "bdev_io_cache_size": 256, 00:25:54.692 "bdev_io_pool_size": 65535, 00:25:54.692 "iobuf_large_cache_size": 16, 00:25:54.692 "iobuf_small_cache_size": 128 00:25:54.692 } 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "method": "bdev_raid_set_options", 00:25:54.692 "params": { 00:25:54.692 "process_window_size_kb": 1024 00:25:54.692 } 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "method": "bdev_iscsi_set_options", 00:25:54.692 "params": { 00:25:54.692 "timeout_sec": 
30 00:25:54.692 } 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "method": "bdev_nvme_set_options", 00:25:54.692 "params": { 00:25:54.692 "action_on_timeout": "none", 00:25:54.692 "allow_accel_sequence": false, 00:25:54.692 "arbitration_burst": 0, 00:25:54.692 "bdev_retry_count": 3, 00:25:54.692 "ctrlr_loss_timeout_sec": 0, 00:25:54.692 "delay_cmd_submit": true, 00:25:54.692 "dhchap_dhgroups": [ 00:25:54.692 "null", 00:25:54.692 "ffdhe2048", 00:25:54.692 "ffdhe3072", 00:25:54.692 "ffdhe4096", 00:25:54.692 "ffdhe6144", 00:25:54.692 "ffdhe8192" 00:25:54.692 ], 00:25:54.692 "dhchap_digests": [ 00:25:54.692 "sha256", 00:25:54.692 "sha384", 00:25:54.692 "sha512" 00:25:54.692 ], 00:25:54.692 "disable_auto_failback": false, 00:25:54.692 "fast_io_fail_timeout_sec": 0, 00:25:54.692 "generate_uuids": false, 00:25:54.692 "high_priority_weight": 0, 00:25:54.692 "io_path_stat": false, 00:25:54.692 "io_queue_requests": 512, 00:25:54.692 "keep_alive_timeout_ms": 10000, 00:25:54.692 "low_priority_weight": 0, 00:25:54.692 "medium_priority_weight": 0, 00:25:54.692 "nvme_adminq_poll_period_us": 10000, 00:25:54.692 "nvme_error_stat": false, 00:25:54.692 "nvme_ioq_poll_period_us": 0, 00:25:54.692 "rdma_cm_event_timeout_ms": 0, 00:25:54.692 "rdma_max_cq_size": 0, 00:25:54.692 "rdma_srq_size": 0, 00:25:54.692 "reconnect_delay_sec": 0, 00:25:54.692 "timeout_admin_us": 0, 00:25:54.692 "timeout_us": 0, 00:25:54.692 "transport_ack_timeout": 0, 00:25:54.692 "transport_retry_count": 4, 00:25:54.692 "transport_tos": 0 00:25:54.692 } 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "method": "bdev_nvme_attach_controller", 00:25:54.692 "params": { 00:25:54.692 "adrfam": "IPv4", 00:25:54.692 "ctrlr_loss_timeout_sec": 0, 00:25:54.692 "ddgst": false, 00:25:54.692 "fast_io_fail_timeout_sec": 0, 00:25:54.692 "hdgst": false, 00:25:54.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.692 "name": "nvme0", 00:25:54.692 "prchk_guard": false, 00:25:54.692 "prchk_reftag": false, 00:25:54.692 "psk": "key0", 00:25:54.692 "reconnect_delay_sec": 0, 00:25:54.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.692 "traddr": "127.0.0.1", 00:25:54.692 "trsvcid": "4420", 00:25:54.692 "trtype": "TCP" 00:25:54.692 } 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "method": "bdev_nvme_set_hotplug", 00:25:54.692 "params": { 00:25:54.692 "enable": false, 00:25:54.692 "period_us": 100000 00:25:54.692 } 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "method": "bdev_wait_for_examine" 00:25:54.692 } 00:25:54.692 ] 00:25:54.692 }, 00:25:54.692 { 00:25:54.692 "subsystem": "nbd", 00:25:54.692 "config": [] 00:25:54.692 } 00:25:54.692 ] 00:25:54.692 }' 00:25:54.692 09:06:10 keyring_file -- keyring/file.sh@114 -- # killprocess 95603 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 95603 ']' 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@950 -- # kill -0 95603 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95603 00:25:54.692 killing process with pid 95603 00:25:54.692 Received shutdown signal, test time was about 1.000000 seconds 00:25:54.692 00:25:54.692 Latency(us) 00:25:54.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.692 =================================================================================================================== 00:25:54.692 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95603' 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@965 -- # kill 95603 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@970 -- # wait 95603 00:25:54.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.692 09:06:10 keyring_file -- keyring/file.sh@117 -- # bperfpid=96073 00:25:54.692 09:06:10 keyring_file -- keyring/file.sh@119 -- # waitforlisten 96073 /var/tmp/bperf.sock 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 96073 ']' 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:54.692 09:06:10 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:54.692 09:06:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 09:06:10 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:25:54.692 "subsystems": [ 00:25:54.692 { 00:25:54.692 "subsystem": "keyring", 00:25:54.692 "config": [ 00:25:54.692 { 00:25:54.693 "method": "keyring_file_add_key", 00:25:54.693 "params": { 00:25:54.693 "name": "key0", 00:25:54.693 "path": "/tmp/tmp.gNvMR1Lef1" 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "keyring_file_add_key", 00:25:54.693 "params": { 00:25:54.693 "name": "key1", 00:25:54.693 "path": "/tmp/tmp.rZB1uBi0OL" 00:25:54.693 } 00:25:54.693 } 00:25:54.693 ] 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "subsystem": "iobuf", 00:25:54.693 "config": [ 00:25:54.693 { 00:25:54.693 "method": "iobuf_set_options", 00:25:54.693 "params": { 00:25:54.693 "large_bufsize": 135168, 00:25:54.693 "large_pool_count": 1024, 00:25:54.693 "small_bufsize": 8192, 00:25:54.693 "small_pool_count": 8192 00:25:54.693 } 00:25:54.693 } 00:25:54.693 ] 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "subsystem": "sock", 00:25:54.693 "config": [ 00:25:54.693 { 00:25:54.693 "method": "sock_set_default_impl", 00:25:54.693 "params": { 00:25:54.693 "impl_name": "posix" 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "sock_impl_set_options", 00:25:54.693 "params": { 00:25:54.693 "enable_ktls": false, 00:25:54.693 "enable_placement_id": 0, 00:25:54.693 "enable_quickack": false, 00:25:54.693 "enable_recv_pipe": true, 00:25:54.693 "enable_zerocopy_send_client": false, 00:25:54.693 "enable_zerocopy_send_server": true, 00:25:54.693 "impl_name": "ssl", 00:25:54.693 "recv_buf_size": 4096, 00:25:54.693 "send_buf_size": 4096, 00:25:54.693 "tls_version": 0, 00:25:54.693 "zerocopy_threshold": 0 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "sock_impl_set_options", 00:25:54.693 "params": { 00:25:54.693 "enable_ktls": false, 00:25:54.693 "enable_placement_id": 0, 00:25:54.693 "enable_quickack": false, 
00:25:54.693 "enable_recv_pipe": true, 00:25:54.693 "enable_zerocopy_send_client": false, 00:25:54.693 "enable_zerocopy_send_server": true, 00:25:54.693 "impl_name": "posix", 00:25:54.693 "recv_buf_size": 2097152, 00:25:54.693 "send_buf_size": 2097152, 00:25:54.693 "tls_version": 0, 00:25:54.693 "zerocopy_threshold": 0 00:25:54.693 } 00:25:54.693 } 00:25:54.693 ] 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "subsystem": "vmd", 00:25:54.693 "config": [] 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "subsystem": "accel", 00:25:54.693 "config": [ 00:25:54.693 { 00:25:54.693 "method": "accel_set_options", 00:25:54.693 "params": { 00:25:54.693 "buf_count": 2048, 00:25:54.693 "large_cache_size": 16, 00:25:54.693 "sequence_count": 2048, 00:25:54.693 "small_cache_size": 128, 00:25:54.693 "task_count": 2048 00:25:54.693 } 00:25:54.693 } 00:25:54.693 ] 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "subsystem": "bdev", 00:25:54.693 "config": [ 00:25:54.693 { 00:25:54.693 "method": "bdev_set_options", 00:25:54.693 "params": { 00:25:54.693 "bdev_auto_examine": true, 00:25:54.693 "bdev_io_cache_size": 256, 00:25:54.693 "bdev_io_pool_size": 65535, 00:25:54.693 "iobuf_large_cache_size": 16, 00:25:54.693 "iobuf_small_cache_size": 128 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "bdev_raid_set_options", 00:25:54.693 "params": { 00:25:54.693 "process_window_size_kb": 1024 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "bdev_iscsi_set_options", 00:25:54.693 "params": { 00:25:54.693 "timeout_sec": 30 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "bdev_nvme_set_options", 00:25:54.693 "params": { 00:25:54.693 "action_on_timeout": "none", 00:25:54.693 "allow_accel_sequence": false, 00:25:54.693 "arbitration_burst": 0, 00:25:54.693 "bdev_retry_count": 3, 00:25:54.693 "ctrlr_loss_timeout_sec": 0, 00:25:54.693 "delay_cmd_submit": true, 00:25:54.693 "dhchap_dhgroups": [ 00:25:54.693 "null", 00:25:54.693 "ffdhe2048", 00:25:54.693 "ffdhe3072", 00:25:54.693 "ffdhe4096", 00:25:54.693 "ffdhe6144", 00:25:54.693 "ffdhe8192" 00:25:54.693 ], 00:25:54.693 "dhchap_digests": [ 00:25:54.693 "sha256", 00:25:54.693 "sha384", 00:25:54.693 "sha512" 00:25:54.693 ], 00:25:54.693 "disable_auto_failback": false, 00:25:54.693 "fast_io_fail_timeout_sec": 0, 00:25:54.693 "generate_uuids": false, 00:25:54.693 "high_priority_weight": 0, 00:25:54.693 "io_path_stat": false, 00:25:54.693 "io_queue_requests": 512, 00:25:54.693 "keep_alive_timeout_ms": 10000, 00:25:54.693 "low_priority_weight": 0, 00:25:54.693 "medium_priority_weight": 0, 00:25:54.693 "nvme_adminq_poll_period_us": 10000, 00:25:54.693 "nvme_error_stat": false, 00:25:54.693 "nvme_ioq_poll_period_us": 0, 00:25:54.693 "rdma_cm_event_timeout_ms": 0, 00:25:54.693 "rdma_max_cq_size": 0, 00:25:54.693 "rdma_srq_size": 0, 00:25:54.693 "reconnect_delay_sec": 0, 00:25:54.693 "timeout_admin_us": 0, 00:25:54.693 "timeout_us": 0, 00:25:54.693 "transport_ack_timeout": 0, 00:25:54.693 "transport_retry_count": 4, 00:25:54.693 "transport_tos": 0 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "bdev_nvme_attach_controller", 00:25:54.693 "params": { 00:25:54.693 "adrfam": "IPv4", 00:25:54.693 "ctrlr_loss_timeout_sec": 0, 00:25:54.693 "ddgst": false, 00:25:54.693 "fast_io_fail_timeout_sec": 0, 00:25:54.693 "hdgst": false, 00:25:54.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.693 "name": "nvme0", 00:25:54.693 "prchk_guard": false, 00:25:54.693 "prchk_reftag": false, 00:25:54.693 "psk": "key0", 00:25:54.693 
"reconnect_delay_sec": 0, 00:25:54.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.693 "traddr": "127.0.0.1", 00:25:54.693 "trsvcid": "4420", 00:25:54.693 "trtype": "TCP" 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.693 "method": "bdev_nvme_set_hotplug", 00:25:54.693 "params": { 00:25:54.693 "enable": false, 00:25:54.693 "period_us": 100000 00:25:54.693 } 00:25:54.693 }, 00:25:54.693 { 00:25:54.694 "method": "bdev_wait_for_examine" 00:25:54.694 } 00:25:54.694 ] 00:25:54.694 }, 00:25:54.694 { 00:25:54.694 "subsystem": "nbd", 00:25:54.694 "config": [] 00:25:54.694 } 00:25:54.694 ] 00:25:54.694 }' 00:25:54.694 [2024-05-15 09:06:10.904447] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:25:54.694 [2024-05-15 09:06:10.904587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96073 ] 00:25:54.951 [2024-05-15 09:06:11.051279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.951 [2024-05-15 09:06:11.112289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.210 [2024-05-15 09:06:11.253004] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.777 09:06:11 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.777 09:06:11 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:55.777 09:06:11 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:25:55.777 09:06:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:55.777 09:06:11 keyring_file -- keyring/file.sh@120 -- # jq length 00:25:56.036 09:06:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:25:56.036 09:06:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:25:56.036 09:06:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:56.036 09:06:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:56.036 09:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:56.036 09:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:56.036 09:06:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.294 09:06:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:56.294 09:06:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:25:56.294 09:06:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:56.294 09:06:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:56.294 09:06:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:56.294 09:06:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:56.294 09:06:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:56.553 09:06:12 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:25:56.553 09:06:12 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:25:56.553 09:06:12 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:25:56.553 09:06:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_get_controllers 00:25:57.119 09:06:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:25:57.119 09:06:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:57.119 09:06:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.gNvMR1Lef1 /tmp/tmp.rZB1uBi0OL 00:25:57.119 09:06:13 keyring_file -- keyring/file.sh@20 -- # killprocess 96073 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 96073 ']' 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@950 -- # kill -0 96073 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96073 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:57.119 killing process with pid 96073 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:57.119 09:06:13 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96073' 00:25:57.119 Received shutdown signal, test time was about 1.000000 seconds 00:25:57.119 00:25:57.120 Latency(us) 00:25:57.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.120 =================================================================================================================== 00:25:57.120 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@965 -- # kill 96073 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@970 -- # wait 96073 00:25:57.120 09:06:13 keyring_file -- keyring/file.sh@21 -- # killprocess 95568 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 95568 ']' 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@950 -- # kill -0 95568 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95568 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:57.120 killing process with pid 95568 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95568' 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@965 -- # kill 95568 00:25:57.120 [2024-05-15 09:06:13.305030] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:57.120 [2024-05-15 09:06:13.305071] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:57.120 09:06:13 keyring_file -- common/autotest_common.sh@970 -- # wait 95568 00:25:57.378 00:25:57.378 real 0m16.402s 00:25:57.378 user 0m41.982s 00:25:57.378 sys 0m3.114s 00:25:57.378 09:06:13 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:57.378 09:06:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:57.378 ************************************ 00:25:57.378 END TEST keyring_file 
00:25:57.378 ************************************ 00:25:57.636 09:06:13 -- spdk/autotest.sh@305 -- # [[ n == y ]] 00:25:57.636 09:06:13 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@344 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@361 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@365 -- # '[' 0 -eq 1 ']' 00:25:57.636 09:06:13 -- spdk/autotest.sh@372 -- # [[ 0 -eq 1 ]] 00:25:57.636 09:06:13 -- spdk/autotest.sh@376 -- # [[ 0 -eq 1 ]] 00:25:57.636 09:06:13 -- spdk/autotest.sh@380 -- # [[ 0 -eq 1 ]] 00:25:57.636 09:06:13 -- spdk/autotest.sh@384 -- # [[ 0 -eq 1 ]] 00:25:57.636 09:06:13 -- spdk/autotest.sh@389 -- # trap - SIGINT SIGTERM EXIT 00:25:57.636 09:06:13 -- spdk/autotest.sh@391 -- # timing_enter post_cleanup 00:25:57.636 09:06:13 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:57.636 09:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:57.636 09:06:13 -- spdk/autotest.sh@392 -- # autotest_cleanup 00:25:57.636 09:06:13 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:25:57.636 09:06:13 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:25:57.636 09:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:59.012 INFO: APP EXITING 00:25:59.012 INFO: killing all VMs 00:25:59.012 INFO: killing vhost app 00:25:59.012 INFO: EXIT DONE 00:25:59.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.579 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:59.579 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:00.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:00.515 Cleaning 00:26:00.515 Removing: /var/run/dpdk/spdk0/config 00:26:00.515 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:00.515 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:00.515 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:00.515 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:00.515 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:00.515 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:00.515 Removing: /var/run/dpdk/spdk1/config 00:26:00.515 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:00.515 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:00.515 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:00.515 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:00.515 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:00.515 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:00.515 Removing: /var/run/dpdk/spdk2/config 00:26:00.515 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:00.515 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:00.515 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:00.515 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:00.515 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:00.515 Removing: /var/run/dpdk/spdk2/hugepage_info 
00:26:00.515 Removing: /var/run/dpdk/spdk3/config 00:26:00.515 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:00.515 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:00.515 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:00.515 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:00.515 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:00.515 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:00.515 Removing: /var/run/dpdk/spdk4/config 00:26:00.515 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:00.515 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:00.515 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:00.515 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:00.515 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:00.515 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:00.515 Removing: /dev/shm/nvmf_trace.0 00:26:00.515 Removing: /dev/shm/spdk_tgt_trace.pid59985 00:26:00.515 Removing: /var/run/dpdk/spdk0 00:26:00.515 Removing: /var/run/dpdk/spdk1 00:26:00.515 Removing: /var/run/dpdk/spdk2 00:26:00.515 Removing: /var/run/dpdk/spdk3 00:26:00.515 Removing: /var/run/dpdk/spdk4 00:26:00.515 Removing: /var/run/dpdk/spdk_pid59840 00:26:00.515 Removing: /var/run/dpdk/spdk_pid59985 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60227 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60324 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60359 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60469 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60499 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60617 00:26:00.515 Removing: /var/run/dpdk/spdk_pid60897 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61066 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61148 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61240 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61330 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61367 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61398 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61460 00:26:00.515 Removing: /var/run/dpdk/spdk_pid61577 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62200 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62263 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62327 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62355 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62434 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62449 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62528 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62542 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62594 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62610 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62662 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62692 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62844 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62874 00:26:00.515 Removing: /var/run/dpdk/spdk_pid62948 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63019 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63049 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63102 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63142 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63171 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63200 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63240 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63269 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63309 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63338 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63379 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63409 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63444 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63478 00:26:00.515 
Removing: /var/run/dpdk/spdk_pid63513 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63547 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63582 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63611 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63651 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63683 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63726 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63755 00:26:00.515 Removing: /var/run/dpdk/spdk_pid63785 00:26:00.774 Removing: /var/run/dpdk/spdk_pid63855 00:26:00.774 Removing: /var/run/dpdk/spdk_pid63966 00:26:00.774 Removing: /var/run/dpdk/spdk_pid64345 00:26:00.774 Removing: /var/run/dpdk/spdk_pid67678 00:26:00.774 Removing: /var/run/dpdk/spdk_pid68008 00:26:00.774 Removing: /var/run/dpdk/spdk_pid70430 00:26:00.774 Removing: /var/run/dpdk/spdk_pid70804 00:26:00.774 Removing: /var/run/dpdk/spdk_pid71053 00:26:00.774 Removing: /var/run/dpdk/spdk_pid71100 00:26:00.774 Removing: /var/run/dpdk/spdk_pid71965 00:26:00.774 Removing: /var/run/dpdk/spdk_pid72015 00:26:00.774 Removing: /var/run/dpdk/spdk_pid72370 00:26:00.774 Removing: /var/run/dpdk/spdk_pid72899 00:26:00.774 Removing: /var/run/dpdk/spdk_pid73337 00:26:00.774 Removing: /var/run/dpdk/spdk_pid74306 00:26:00.774 Removing: /var/run/dpdk/spdk_pid75264 00:26:00.774 Removing: /var/run/dpdk/spdk_pid75382 00:26:00.774 Removing: /var/run/dpdk/spdk_pid75446 00:26:00.774 Removing: /var/run/dpdk/spdk_pid76889 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77121 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77546 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77655 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77793 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77825 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77857 00:26:00.774 Removing: /var/run/dpdk/spdk_pid77897 00:26:00.774 Removing: /var/run/dpdk/spdk_pid78062 00:26:00.774 Removing: /var/run/dpdk/spdk_pid78215 00:26:00.774 Removing: /var/run/dpdk/spdk_pid78471 00:26:00.774 Removing: /var/run/dpdk/spdk_pid78595 00:26:00.774 Removing: /var/run/dpdk/spdk_pid78843 00:26:00.774 Removing: /var/run/dpdk/spdk_pid78968 00:26:00.774 Removing: /var/run/dpdk/spdk_pid79089 00:26:00.774 Removing: /var/run/dpdk/spdk_pid79425 00:26:00.774 Removing: /var/run/dpdk/spdk_pid79841 00:26:00.774 Removing: /var/run/dpdk/spdk_pid80132 00:26:00.774 Removing: /var/run/dpdk/spdk_pid80604 00:26:00.774 Removing: /var/run/dpdk/spdk_pid80612 00:26:00.774 Removing: /var/run/dpdk/spdk_pid80957 00:26:00.774 Removing: /var/run/dpdk/spdk_pid80978 00:26:00.774 Removing: /var/run/dpdk/spdk_pid80992 00:26:00.774 Removing: /var/run/dpdk/spdk_pid81025 00:26:00.774 Removing: /var/run/dpdk/spdk_pid81030 00:26:00.774 Removing: /var/run/dpdk/spdk_pid81324 00:26:00.774 Removing: /var/run/dpdk/spdk_pid81368 00:26:00.774 Removing: /var/run/dpdk/spdk_pid81684 00:26:00.774 Removing: /var/run/dpdk/spdk_pid81921 00:26:00.774 Removing: /var/run/dpdk/spdk_pid82405 00:26:00.775 Removing: /var/run/dpdk/spdk_pid82973 00:26:00.775 Removing: /var/run/dpdk/spdk_pid84365 00:26:00.775 Removing: /var/run/dpdk/spdk_pid84942 00:26:00.775 Removing: /var/run/dpdk/spdk_pid84944 00:26:00.775 Removing: /var/run/dpdk/spdk_pid86949 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87044 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87130 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87201 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87364 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87454 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87531 00:26:00.775 Removing: /var/run/dpdk/spdk_pid87619 00:26:00.775 Removing: 
/var/run/dpdk/spdk_pid87965 00:26:00.775 Removing: /var/run/dpdk/spdk_pid88640 00:26:00.775 Removing: /var/run/dpdk/spdk_pid89992 00:26:00.775 Removing: /var/run/dpdk/spdk_pid90183 00:26:00.775 Removing: /var/run/dpdk/spdk_pid90470 00:26:00.775 Removing: /var/run/dpdk/spdk_pid91350 00:26:00.775 Removing: /var/run/dpdk/spdk_pid91893 00:26:00.775 Removing: /var/run/dpdk/spdk_pid91898 00:26:00.775 Removing: /var/run/dpdk/spdk_pid92240 00:26:00.775 Removing: /var/run/dpdk/spdk_pid92390 00:26:00.775 Removing: /var/run/dpdk/spdk_pid92548 00:26:00.775 Removing: /var/run/dpdk/spdk_pid92645 00:26:00.775 Removing: /var/run/dpdk/spdk_pid92795 00:26:00.775 Removing: /var/run/dpdk/spdk_pid92904 00:26:00.775 Removing: /var/run/dpdk/spdk_pid94180 00:26:00.775 Removing: /var/run/dpdk/spdk_pid94210 00:26:00.775 Removing: /var/run/dpdk/spdk_pid94245 00:26:00.775 Removing: /var/run/dpdk/spdk_pid94495 00:26:00.775 Removing: /var/run/dpdk/spdk_pid94528 00:26:00.775 Removing: /var/run/dpdk/spdk_pid94558 00:26:00.775 Removing: /var/run/dpdk/spdk_pid95568 00:26:00.775 Removing: /var/run/dpdk/spdk_pid95603 00:26:00.775 Removing: /var/run/dpdk/spdk_pid96073 00:26:00.775 Clean 00:26:01.033 09:06:17 -- common/autotest_common.sh@1447 -- # return 0 00:26:01.033 09:06:17 -- spdk/autotest.sh@393 -- # timing_exit post_cleanup 00:26:01.033 09:06:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.033 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:26:01.033 09:06:17 -- spdk/autotest.sh@395 -- # timing_exit autotest 00:26:01.033 09:06:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.033 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:26:01.033 09:06:17 -- spdk/autotest.sh@396 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:01.033 09:06:17 -- spdk/autotest.sh@398 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:01.034 09:06:17 -- spdk/autotest.sh@398 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:01.034 09:06:17 -- spdk/autotest.sh@400 -- # hash lcov 00:26:01.034 09:06:17 -- spdk/autotest.sh@400 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:01.034 09:06:17 -- spdk/autotest.sh@402 -- # hostname 00:26:01.034 09:06:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:01.293 geninfo: WARNING: invalid characters removed from testname! 
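The coverage post-processing that follows merges the pre-test baseline capture (cov_base.info) with the test capture written above (cov_test.info) into cov_total.info, then prunes paths that are not SPDK code. A condensed sketch of those lcov steps, with the repeated --rc/--no-external options dropped (a paraphrase of the commands below, not a verbatim copy):

    out=/home/vagrant/spdk_repo/spdk/../output
    lcov -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info    # combine baseline with this run
    lcov -r $out/cov_total.info '*/dpdk/*'           -o $out/cov_total.info    # drop bundled DPDK sources
    lcov -r $out/cov_total.info '/usr/*'             -o $out/cov_total.info    # drop system headers
    lcov -r $out/cov_total.info '*/examples/vmd/*'   -o $out/cov_total.info    # drop example/tool code
    lcov -r $out/cov_total.info '*/app/spdk_lspci/*' -o $out/cov_total.info
    lcov -r $out/cov_total.info '*/app/spdk_top/*'   -o $out/cov_total.info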
00:26:27.862 09:06:43 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:32.109 09:06:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:34.640 09:06:50 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:37.190 09:06:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:40.501 09:06:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:43.032 09:06:58 -- spdk/autotest.sh@408 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:45.612 09:07:01 -- spdk/autotest.sh@409 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:45.612 09:07:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:45.613 09:07:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:45.613 09:07:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.613 09:07:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.613 09:07:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.613 09:07:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.613 09:07:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.613 09:07:01 -- paths/export.sh@5 -- $ export PATH 00:26:45.613 09:07:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.613 09:07:01 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:45.613 09:07:01 -- common/autobuild_common.sh@437 -- $ date +%s 00:26:45.613 09:07:01 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715764021.XXXXXX 00:26:45.613 09:07:01 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715764021.CUIwUh 00:26:45.613 09:07:01 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:26:45.613 09:07:01 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:26:45.613 09:07:01 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:45.613 09:07:01 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:45.613 09:07:01 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:45.613 09:07:01 -- common/autobuild_common.sh@453 -- $ get_config_params 00:26:45.613 09:07:01 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:26:45.613 09:07:01 -- common/autotest_common.sh@10 -- $ set +x 00:26:45.613 09:07:01 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:26:45.613 09:07:01 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:26:45.613 09:07:01 -- pm/common@17 -- $ local monitor 00:26:45.613 09:07:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:45.613 09:07:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:45.613 09:07:01 -- pm/common@25 -- $ sleep 1 00:26:45.613 09:07:01 -- pm/common@21 -- $ date +%s 00:26:45.613 09:07:01 -- pm/common@21 -- $ date +%s 00:26:45.613 09:07:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715764021 00:26:45.613 09:07:01 -- pm/common@21 -- $ 
00:26:46.547 09:07:02 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:26:46.547 09:07:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:26:46.547 09:07:02 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:26:46.547 09:07:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:26:46.547 09:07:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:26:46.547 09:07:02 -- spdk/autopackage.sh@19 -- $ timing_finish
00:26:46.547 09:07:02 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:26:46.547 09:07:02 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:26:46.547 09:07:02 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:46.547 09:07:02 -- spdk/autopackage.sh@20 -- $ exit 0
00:26:46.547 09:07:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:26:46.547 09:07:02 -- pm/common@29 -- $ signal_monitor_resources TERM
00:26:46.547 09:07:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:26:46.547 09:07:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:46.547 09:07:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:26:46.547 09:07:02 -- pm/common@44 -- $ pid=97709
00:26:46.547 09:07:02 -- pm/common@50 -- $ kill -TERM 97709
00:26:46.547 09:07:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:46.547 09:07:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:26:46.547 09:07:02 -- pm/common@44 -- $ pid=97711
00:26:46.547 09:07:02 -- pm/common@50 -- $ kill -TERM 97711
00:26:46.547 + [[ -n 5151 ]]
00:26:46.547 + sudo kill 5151
00:26:46.555 [Pipeline] }
00:26:46.572 [Pipeline] // timeout
00:26:46.576 [Pipeline] }
00:26:46.591 [Pipeline] // stage
00:26:46.595 [Pipeline] }
00:26:46.609 [Pipeline] // catchError
00:26:46.615 [Pipeline] stage
00:26:46.617 [Pipeline] { (Stop VM)
00:26:46.630 [Pipeline] sh
00:26:46.905 + vagrant halt
00:26:51.091 ==> default: Halting domain...
00:26:56.445 [Pipeline] sh
00:26:56.721 + vagrant destroy -f
00:27:00.909 ==> default: Removing domain...
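The Stop VM stage above simply powers off and deletes the Vagrant guest, and the step that follows moves the collected output into the Jenkins workspace. A sketch of those teardown commands under the same assumptions (the WORKSPACE fallback reuses the workspace path from this log and is illustrative only):

# Sketch only: tear down the test VM and keep its output, as in the steps around this point.
vagrant halt          # graceful shutdown of the guest (a libvirt domain here)
vagrant destroy -f    # remove the guest without prompting
mv output "${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-vg-autotest}/output"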
00:27:00.921 [Pipeline] sh
00:27:01.201 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:27:01.209 [Pipeline] }
00:27:01.226 [Pipeline] // stage
00:27:01.231 [Pipeline] }
00:27:01.248 [Pipeline] // dir
00:27:01.254 [Pipeline] }
00:27:01.271 [Pipeline] // wrap
00:27:01.277 [Pipeline] }
00:27:01.293 [Pipeline] // catchError
00:27:01.302 [Pipeline] stage
00:27:01.304 [Pipeline] { (Epilogue)
00:27:01.317 [Pipeline] sh
00:27:01.595 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:08.168 [Pipeline] catchError
00:27:08.170 [Pipeline] {
00:27:08.184 [Pipeline] sh
00:27:08.536 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:08.794 Artifacts sizes are good
00:27:08.802 [Pipeline] }
00:27:08.816 [Pipeline] // catchError
00:27:08.827 [Pipeline] archiveArtifacts
00:27:08.834 Archiving artifacts
00:27:09.000 [Pipeline] cleanWs
00:27:09.010 [WS-CLEANUP] Deleting project workspace...
00:27:09.010 [WS-CLEANUP] Deferred wipeout is used...
00:27:09.016 [WS-CLEANUP] done
00:27:09.018 [Pipeline] }
00:27:09.036 [Pipeline] // stage
00:27:09.042 [Pipeline] }
00:27:09.058 [Pipeline] // node
00:27:09.064 [Pipeline] End of Pipeline
00:27:09.098 Finished: SUCCESS